Handless Employees

Round One

“Plane down, mass casualties possible; initiate disaster plan.”

The page interrupted my evening out with friends a few Saturday nights ago. Looking up from my dinner, I noticed the restaurant television had cut away to a news story at Denver International Airport. Continental Flight 1404, en route to Houston, had crashed during takeoff, belly-flopping to a fiery rest a few hundred yards off the runway. The airport is about 10 miles from the nearest hospital—mine.

The situation ended considerably better than originally expected. Thirty-eight people were treated at several Denver hospitals, 11 of them at my hospital, and most were discharged from the emergency department. No one died. The crash remains under investigation, and little is known yet about its cause.

Round Two

“Give me a call. I need to talk to you urgently.”

That page arrived the following Monday morning. It was from a co-worker. There had been an unexpected bad outcome in a young male patient. The hospital’s quality and risk management group had found out about the case and called for a peer review. My colleague was scared; would he be publicly criticized? Punished? Fired?

If we endeavor to fundamentally enhance the safety of hospital care, we must allow providers to openly discuss errors without fear of rebuke.

The patient had been admitted with a chronic disease flare-up. He was on the mend after receiving an increased dose of medication. The night before he was scheduled to be discharged, he developed a new symptom, was evaluated by the cross-cover team, and a plan was set in motion. However, a critical lab result that became available overnight was mistakenly never called to the provider, and it went unnoticed by the primary team, which had triaged the patient to the end of its rounds. By then, he was in extremis.

Image (Getty Images): Remains of the Boeing 737 that veered off a runway Dec. 20 in Denver. Miraculously, none of the 105 aboard were killed in the accident.

Planes and Patients

The proximity of these two events provoked comparisons.

By now, comparing healthcare to the aviation industry has become cliché. Both industries demand highly trained, highly skilled operators; errors in both can result in death; both depend on technology; and both have turned to systems engineering to improve efficiency and reduce mistakes. But it is on this last point, the use of systems engineering, that the two industries diverge, and I think we get it wrong in medicine.

In aviation, there are highly prescriptive algorithms that must be followed, and much of a pilot’s work is under constant scrutiny by air traffic controllers and data recorders. A deviation from protocol rarely goes unnoticed. Errors are systematically compiled, scrutinized, and categorized, with the aim of further refining systems to reduce the likelihood of future errors. Although blame is sometimes assigned, it is assigned in the context of improving the system. Thus, the aviation industry is awash with data to inform and fuel its systems engineering.

Meanwhile, in medicine our deeply ingrained sense of autonomy breeds variability, which is not only tolerated but often goes unnoticed. Further, we employ a model of error analysis that focuses on affixing blame, as if culpability alone will prevent future errors. Someone made an error, a bad outcome ensued, and the culprit must be identified and punished. The result is reprimand, remediation, or banishment from the medical staff. At times this is an appropriate response; some errors are so egregious, or so indicative of a chronic problem, that discipline is warranted. More often, though, the punitive process misses the mark because it focuses on blame instead of preventing the next error. Unlike aviation, this leaves medicine bereft of data for improving our care systems.

“Blame and Punish” Doesn’t Work

There are two problems with the “blame and punish” approach. First, it is predicated on the belief that providers make errors because they are poorly trained, inept, or just plain careless. Sometimes this is the case.

However, the vast majority of peer reviews I’ve participated in involved errors made by extremely well-trained, highly skilled clinicians with the highest level of integrity and vigilance. The real problem lies in the human condition.

Humans make mistakes. Always have, always will.

In college, I worked summers in a factory that applied coating to paper. This combined colossal machines spinning at breakneck speeds, huge rolls of paper, and hands—a recipe for handless employees. But accidents rarely happened. Over time, the mill engineers had designed systems so foolproof that the workers couldn’t chop their hands off, even if they wanted to. This level of safety was achieved, in principle, by learning how errors were made so that future errors could be prevented. It was not achieved by blaming handless employees. This paper-plant process recognizes the fallible nature of human beings; it’s the same recognition we need in medicine.

Whether we commit a systems error (e.g., the lab test results arrived after the patient was discharged), a cognitive error (e.g., I continue to believe this pulmonary embolism is pneumonia because my night-coverage partner signed it out as pneumonia), or simply a human error (e.g., the lab forgot to call a critical result to the ordering physician), we work in systems that often result in errors. And the only meaningful hope we have to reduce errors depends on our ability to identify them and build systems so safe that we couldn’t hurt a patient, even if we tried.

This leads to the second problem with the blame-and-punish mentality: It breeds concealment of errors, as providers become reticent to expose mistakes for fear of retribution. Thus, an important pipeline of information about system deficiencies dries up, and we are left to suffer the same cycle of errors.

Budging the quality and patient-safety needle will require a culture that freely and openly admits mistakes in order to analyze them and prevent the next ones. This is inherently difficult for most of us, and next to impossible when we fear reprimand. Yet if we endeavor to fundamentally enhance the safety of hospital care, we must allow providers to openly discuss errors without fear of rebuke. Accomplishing this will require understanding, leadership, and action—and it starts with each of us.

Anything short of this will just result in more bad pages. TH

Dr. Glasheen is associate professor of medicine at the University of Colorado Denver, where he serves as director of Hospital Medicine and the Hospitalist Training Program, and as associate program director of the Internal Medicine Residency Program.

Issue: The Hospitalist - 2009(03)