The type II error and black holes
An international group of scientists has announced that it has captured an image of a black hole. This feat of scientific achievement and teamwork is another giant step in humankind’s understanding of the universe. It isn’t easy to find something that isn’t there. Black holes exist, and this one is about 6.5 billion times more massive than Earth’s sun. That is a lot of “there.”
In medical research, most articles are about discovering something new. Lately, it is also common to publish studies that claim that something doesn’t exist: no difference is found between treatment A and treatment B. Two decades ago those negative studies were rarely published, but there was merit in the idea that more of them should be. However, that merit presupposed that the negative studies worthy of publication would be well designed, robust, and, most importantly, contain a power calculation showing that the methodology would have detected the phenomenon if the phenomenon were large enough to be clinically important. Alas, the literature has been flooded with negative studies finding no effect because the studies were hopelessly underpowered and never had a realistic chance of detecting anything. This fake news pollutes our medical knowledge.
To clarify, let me provide a simple example. With my myopia, at 100 yards and without my glasses, I can’t detect the difference between LeBron James and Megan Rapinoe, although I know Megan is better at corner kicks.
Now let me give a second, more complex example that obfuscates the same detection issue. Are there moons circling Jupiter? I go out each night, find Jupiter, take a picture with my trusty cell phone, and examine the picture for any evidence of objects circling the planet. I do this many times. How many? Well, if I only do it three times, people will doubt my science, but doing it 1,000 times would take too long. In my experience, most negative studies seem to involve about 30-50 patients. So one picture a week for a year will produce 52 observations. That is a lot of cold nights under the stars. I will use my scientific knowledge and ability to read sky charts to locate Jupiter. (There is an app for that.) I will use my experience to distinguish Jupiter from Venus and Mars. There will be cloudy days, so maybe only 30 clear pictures will be obtained. I will have a second observer examine the photos. We will calculate a kappa statistic for inter-rater agreement. There will be pictures and tables of numbers. When I’m done, I will publish an article saying that Jupiter doesn’t have moons because I didn’t find any. Trust me, I’m a doctor.
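For readers curious about what that inter-rater agreement step actually involves, here is a minimal sketch of Cohen’s kappa in Python. The two observers’ photo labels below are invented for illustration; nothing in the column specifies real data.

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning categorical labels."""
    n = len(rater_a)
    # observed agreement: fraction of photos on which both raters agree
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # agreement expected by chance, from each rater's label frequencies
    labels = set(rater_a) | set(rater_b)
    p_e = sum((rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# two observers labeling 10 photos as "moon" or "no" (made-up data)
a = ["no", "no", "no", "moon", "no", "no", "no", "no", "moon", "no"]
b = ["no", "no", "no", "no",   "no", "no", "no", "no", "moon", "no"]
print(round(cohens_kappa(a, b), 2))
```

Kappa discounts the agreement two raters would reach by guessing, which is why it is preferred over raw percent agreement when one label ("no moon") dominates.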
Science doesn’t work that way. Science doesn’t care how smart I am, how dedicated I am, how expensive my cell phone is, or how much work I put into the project; science wants empiric proof. My failure to find moons does not refute their existence. A claim that something does NOT exist cannot be correctly made by simply showing that the P value is greater than .05. A statistically insignificant P value might also mean that my experiment, despite all my time, effort, commitment, and data collection, is simply inadequate to detect the phenomenon. My cell phone has enough pixels to see Jupiter but not its moons. The phone isn’t powerful enough. My claim is a type II error.
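The point can be made concrete with a short simulation (the group size and effect size here are invented for illustration): give every “study” a real but modest treatment effect, keep the groups small, and count how often P still comes out above .05.

```python
import random
from math import sqrt
from statistics import NormalDist, mean, stdev

def two_sample_p(x, y):
    # two-sided p value from a two-sample z test (normal approximation)
    se = sqrt(stdev(x) ** 2 / len(x) + stdev(y) ** 2 / len(y))
    z = (mean(x) - mean(y)) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

rng = random.Random(42)
n_per_group, true_effect, sims = 25, 0.3, 2000
misses = 0
for _ in range(sims):
    control = [rng.gauss(0, 1) for _ in range(n_per_group)]
    treated = [rng.gauss(true_effect, 1) for _ in range(n_per_group)]
    if two_sample_p(control, treated) > 0.05:
        misses += 1  # the effect is real, but this study "found nothing"
print(f"type II error rate: {misses / sims:.0%}")
```

With 25 patients per group and a true 0.3 standard-deviation effect, roughly four out of five simulated studies report “no difference” — each one a type II error, not evidence of absence.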
You need to specify the threshold size of a clinically important effect and then show that your methods and results were powerful enough to have detected something that small. Only then may you correctly publish a conclusion that there is nothing there, a donut hole in the black void of space.
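Here is what such a power calculation can look like in its simplest form: the standard normal-approximation sample-size formula for comparing two means, with the effect size expressed in standard-deviation units. The specific numbers are illustrative, not drawn from any particular study.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Patients per group needed to detect a standardized effect size
    with a two-sided, two-sample z test (normal approximation)."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # 1.96 for alpha = .05
    z_power = z(power)           # 0.84 for 80% power
    return ceil(2 * ((z_alpha + z_power) / effect_size) ** 2)

# a "medium" effect (0.5 SD) needs about 63 patients per group;
# a "small" effect (0.2 SD) needs nearly 400 per group
print(n_per_group(0.5), n_per_group(0.2))
```

The asymmetry is the whole story: a 30-to-50-patient study is simply not in the same universe as the sample sizes that small but clinically important effects demand.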
I invite you to do your own survey. As you read journal articles, identify the next 10 times you read a conclusion that claims no effect was found. Scour that article carefully for any indication of the size of effect that those methods and results would have been able to detect. Look for a power calculation. Grade the article with a simple pass/fail on that point. Did the authors provide that information in a way you can understand, or do you just have to trust them? Take President Reagan’s advice, “Trust, but verify.” Most of the 10 articles will lack the calculation, and many of the negative claims are type II errors.
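One quick way to run that check yourself, when an article reports only its group sizes, is to back out the smallest standardized effect the study could plausibly have detected. This is again a normal-approximation sketch with illustrative numbers, not a substitute for the authors’ own calculation.

```python
from math import sqrt
from statistics import NormalDist

def min_detectable_effect(n_per_group, alpha=0.05, power=0.80):
    """Smallest standardized effect a two-group study of this size could
    detect with the given power (two-sided z approximation)."""
    z = NormalDist().inv_cdf
    return (z(1 - alpha / 2) + z(power)) * sqrt(2 / n_per_group)

# a 30-patients-per-group "negative" study can only rule out effects of
# roughly 0.7 SD or larger -- anything smaller slips through undetected
print(round(min_detectable_effect(30), 2))
```

If the effect a clinician would care about is smaller than that number, the study’s negative conclusion is uninformative, whatever its P value.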
Dr. Powell is a pediatric hospitalist and clinical ethics consultant living in St. Louis. Email him at pdnews@mdedge.com.
Pseudoscience redux
My most recent column discussed the problem of pseudoscience that pervades some corners of the Internet. Personally, I respond to pseudoscience primarily by trying to provide accurate and less-biased information. I recognize that not everyone approaches decision making by seeking more information. When dealing with a diverse public, a medical professional needs to have other approaches in the armamentarium.1 When dealing with other physicians, I am less flexible. Either the profession of medicine believes in science or it doesn’t.
Since that column was published, there have been major developments. There are measles outbreaks in the states of Washington and New York, and more than 100 deaths from a measles epidemic in the Philippines. The World Health Organization has made vaccine hesitancy one of its ten threats to global health in 2019.
Facebook has indicated that it might demote the priority and frequency with which it recommends articles that promulgate anti-vax information and conspiracy theories.2 Facebook isn’t doing this because it has had an epiphany; it has come under pressure for its role in the spread of misinformation. Current legislation was written before the rise of social media, when Internet Service Providers were primarily conduits to transfer bits and bytes between computers. Those ISPs were not liable for the content of the transmitted Web pages. Facebook, by producing what it called a newsfeed and by making personalized suggestions for other websites to browse, doesn’t fit the passive model of an ISP.
For alleged violations of users’ privacy, Facebook might be subject to billion-dollar fines, according to a Washington Post article.3 Still, for a company whose revenue is $4 billion per month and whose stock market value is $400 billion, paying a billion-dollar fine for years of alleged misbehaviors that have enabled it to become a giant empire is, “in the scheme of things ... a speeding ticket” in the parlance of the penultimate scene of the movie The Social Network. The real financial risk is people deciding they can’t trust the platform and going elsewhere.
Authorities in the United Kingdom in February 2019 released a highly critical, 108-page report about fake news, which said, “Facebook should not be allowed to behave like ‘digital gangsters’ in the online world.”4 The U.K. report urges new regulations to deal with privacy breaches and with fake news. It endeavors to create a duty for social media companies to combat the spread of misinformation.
Then the Wall Street Journal reported that Pinterest has stopped returning results for searches related to vaccination.5 Pinterest realized that most of the shared images on its platform cautioned against vaccination, which contradicts the recommendations of medical experts. Unable to otherwise combat the flow of misinformation, the company apparently has decided to eliminate returning results, pro or con, for any search terms related to vaccines.
While lamenting the public’s inability to distinguish misinformation on the Internet, I’ve also been observing the factors that lead physicians astray. I expect physicians, as trained scientists and as professionals, to be able to assimilate new information and change their practices accordingly. Those who do research on the translation of technology find that this doesn’t happen with any regularity.
The February 2019 issue of Hospital Pediatrics has four items on the topic of treating bronchiolitis, including two research articles, a brief report, and a commentary. That is obviously a relevant topic this time of year. The impression after reading those four items is that hospitalists don’t really know how to best treat the most common illness they encounter. And even when they “know” how to do it, many factors distort the science. Those factors are highlighted in the article on barriers to minimizing viral testing.6
Dr. Powell is a pediatric hospitalist and clinical ethics consultant living in St. Louis. Email him at pdnews@mdedge.com.
References
1. “Discussing immunization with vaccine-hesitant parents requires caring, individualized approach,” by Jeff Craven, Pediatric News, Nov. 7, 2018; “How do you get anti-vaxxers to vaccinate their kids? Talk to them – for hours,” by Nadine Gartner, Washington Post, Feb. 19, 2019.
2. “Facebook will consider removing or demoting anti-vaccination recommendations amid backlash,” by Taylor Telford, Washington Post, Feb. 15, 2019.
3. “U.S. regulators have met to discuss imposing a record-setting fine against Facebook for privacy violations,” by Tony Romm and Elizabeth Dwoskin, Washington Post, Jan. 18, 2019; “Report: Facebook, FTC discussing ‘multibillion dollar’ fine,” by Associated Press.
4. “Disinformation and ‘fake news’: Final Report,” House of Commons, Feb. 18, 2019, p. 42, item 139.
5. “Pinterest blocks vaccination searches in move to control the conversation,” by Robert McMillan and Daniela Hernandez, The Wall Street Journal, Feb. 20, 2019.
6. “Barriers to minimizing respiratory viral testing in bronchiolitis: Physician perceptions on testing practices,” by MZ Huang et al. Hospital Pediatrics 2019 Feb. doi: 10.1542/hpeds.2018-0108.
Responding to pseudoscience
The Internet has been a transformative means of transmitting information. Alas, the information is often not vetted, so the effects on science, truth, and health literacy have been mixed. Unfortunately, Facebook spawned a billion-dollar industry that transmits gossip. Twitter distributes information based on celebrity rather than intelligence or expertise.
Listservs and Google groups have allowed small communities to form unrestricted by the physical locations of the members. A listserv for pediatric hospitalists, with 3,800 members, provides quick access to a vast body of knowledge, an extensive array of experience, and insightful clinical wisdom. Discussions on this listserv resource have inspired several of my columns, including this one. The professionalism of the listserv members ensures the accuracy of the messages. Because many of the members work nights, it is possible to post a question and receive five consults from peers, even at 1 a.m. When I first started office practice in rural areas, all I had available was my memory, Rudolph’s Pediatrics textbook, and The Harriet Lane Handbook.
Misinformation has led to vaccine hesitancy and the reemergence of diseases such as measles that had been essentially eliminated. Because people haven’t seen these diseases, they are prone to believing any critique about the risk of vaccines. More recently, parents have been refusing the vitamin K shot that is provided to all newborns to prevent hemorrhagic disease of the newborn, now called vitamin K deficiency bleeding. The incidence of this bleeding disorder is relatively rare. However, when it occurs, the results can be disastrous, with life-threatening gastrointestinal bleeds and disabling brain hemorrhages. As with vaccine hesitancy, the corruption of scientific knowledge has led to bad outcomes that once were nearly eliminated by modern health care.
Part of being a professional is communicating in a manner that helps parents understand small risks. I compare newborn vitamin K deficiency to the risk of driving the newborn around for the first 30 days of life without a car seat. The vast majority of people will not have an accident in that time and their babies will be fine. But emergency department doctors would see so many preventable cases of injury that they would strongly advocate for car seats. I also note that if the baby has a stroke due to vitamin K deficiency, we can’t catch it early and fix it.
One issue that comes up in the nursery is whether the physician should refuse to perform a circumcision on a newborn who has not received vitamin K. The risk of bleeding is increased further when circumcisions are done as outpatient procedures a few days after birth. When this topic was discussed on the hospitalists’ listserv, most respondents took a hard line and would not perform the procedure. I am more ambivalent because of my strong personal value of accommodating diverse views and perhaps because I have never experienced a severe case of postop bleeding. The absolute risk is low.
The ethical issues are similar to those involved in maintaining or dismissing families from your practice panel if they refuse vaccines. Some physicians think the threat of having to find another doctor is the only way to appear credible when advocating the use of vaccines. Actions speak louder than words. Other physicians are dedicated to accommodating diverse viewpoints. They try to persuade over time. This is a complex subject and the American Academy of Pediatrics’ position on this changed 2 years ago to consider dismissal as a viable option as long as it adheres to relevant state laws that prohibit abandonment of patients.1
Respect for science has diminished since the era when men walked on the moon. There are myriad reasons for this. They exceed what can be covered here. All human endeavors wax and wane in their prestige and credibility. The 1960s was an era of great technological progress in many areas, including space flight and medicine. Since then, the credibility of science has been harmed by mercenary scientists who do research not to illuminate truth but to sow doubt.2 This doubt has impeded educating the public about the risks of smoking, lead paint, and climate change.
Physicians themselves have contributed to this diminished credibility of scientists. Recommendations have been published and later withdrawn in areas such as dietary cholesterol, salt, and saturated fats, estrogen replacement therapy, and screening for prostate and breast cancers. In modern America, even small inconsistencies and errors get blown up into conspiracy plots.
The era of expecting patients to blindly follow a doctor’s orders has long since passed. Parents will search the Internet for answers. The modern physician needs to guide them to good ones.
Dr. Powell is a pediatric hospitalist and clinical ethics consultant living in St. Louis. Email him at pdnews@mdedge.com.
References
1. Pediatrics. 2016 Aug. doi: 10.1542/peds.2016-2146.
2. “Doubt is Their Product,” by David Michaels, Oxford University Press, 2008, and “Merchants of Doubt,” by Naomi Oreskes and Erik M. Conway, Bloomsbury Press, 2011.
The Internet has been a transformative means of transmitting information. Alas, the information is often not vetted, so the effects on science, truth, and health literacy have been mixed. Unfortunately, Facebook spawned a billion-dollar industry that transmits gossip. Twitter distributes information based on celebrity rather than intelligence or expertise.
Listservs and Google groups have allowed small communities to form unrestricted by the physical locations of the members. A listserv for pediatric hospitalists, with 3,800 members, provides quick access to a vast body of knowledge, an extensive array of experience, and insightful clinical wisdom. Discussions on this listserv have inspired several of my columns, including this one. The professionalism of the listserv members helps ensure the accuracy of the messages. Because many of the members work nights, it is possible to post a question and receive five consults from peers, even at 1 a.m. When I first started office practice in rural areas, all I had available was my memory, Rudolph’s Pediatrics textbook, and The Harriet Lane Handbook.
Misinformation has led to vaccine hesitancy and the reemergence of diseases such as measles that had been essentially eliminated. Because people haven’t seen these diseases, they are prone to believing any critique about the risk of vaccines. More recently, parents have been refusing the vitamin K shot that is provided to all newborns to prevent hemorrhagic disease of the newborn, now called vitamin K deficiency bleeding. This bleeding disorder is relatively rare. However, when it occurs, the results can be disastrous, with life-threatening gastrointestinal bleeds and disabling brain hemorrhages. As with vaccine hesitancy, the corruption of scientific knowledge has led to bad outcomes that once were nearly eliminated by modern health care.
Part of being a professional is communicating in a manner that helps parents understand small risks. I compare newborn vitamin K deficiency to the risk of driving the newborn around for the first 30 days of life without a car seat. The vast majority of people will not have an accident in that time and their babies will be fine. But emergency department doctors would see so many preventable cases of injury that they would strongly advocate for car seats. I also note that if the baby has a stroke due to vitamin K deficiency, we can’t catch it early and fix it.
One issue that comes up in the nursery is whether the physician should refuse to perform a circumcision on a newborn who has not received vitamin K. The risk of bleeding is increased further when circumcisions are done as outpatient procedures a few days after birth. When this topic was discussed on the hospitalists’ listserv, most respondents took a hard line and would not perform the procedure. I am more ambivalent because of my strong personal value of accommodating diverse views and perhaps because I have never experienced a severe case of postop bleeding. The absolute risk is low.
The ethical issues are similar to those involved in maintaining or dismissing families from your practice panel if they refuse vaccines. Some physicians think the threat of having to find another doctor is the only way to appear credible when advocating the use of vaccines. Actions speak louder than words. Other physicians are dedicated to accommodating diverse viewpoints. They try to persuade over time. This is a complex subject, and the American Academy of Pediatrics’ position changed 2 years ago to consider dismissal a viable option as long as it adheres to relevant state laws that prohibit abandonment of patients.1
Respect for science has diminished since the era when men walked on the moon. There are myriad reasons for this, more than can be covered here. All human endeavors wax and wane in their prestige and credibility. The 1960s was an era of great technological progress in many areas, including space flight and medicine. Since then, the credibility of science has been harmed by mercenary scientists who do research not to illuminate truth but to sow doubt.2 This doubt has impeded educating the public about the risks of smoking, lead paint, and climate change.
Physicians themselves have contributed to this diminished credibility of scientists. Recommendations have been published and later withdrawn in areas such as dietary cholesterol, salt, and saturated fats; estrogen replacement therapy; and screening for prostate and breast cancers. In modern America, even small inconsistencies and errors get blown up into conspiracy plots.
The era of expecting patients to blindly follow a doctor’s orders has long since passed. Parents will search the Internet for answers. The modern physician needs to guide them to good ones.
Dr. Powell is a pediatric hospitalist and clinical ethics consultant living in St. Louis. Email him at pdnews@mdedge.com.
References
1. Pediatrics. 2016 Aug. doi: 10.1542/peds.2016-2146.
2. “Doubt is Their Product,” by David Michaels, Oxford University Press, 2008, and “Merchants of Doubt,” by Naomi Oreskes and Erik M. Conway, Bloomsbury Press, 2011.
How much more proof do you need?
One piece of wisdom I was given in medical school was to be neither the first nor the last to adopt a new treatment. The history of medicine is full of new discoveries that don’t work out as well as the first report. It also is full of long-standing dogmas that later were proven false. This balancing act is part of being a professional and an advocate for your patient. There is science behind this art. Everett Rogers identified innovators, early adopters, and laggards as new ideas diffuse into practice.1
A 2007 French study2 of oral amoxicillin for early-onset group B streptococcal (GBS) disease is one of the few single articles in the past 3 decades that changed my practice. It was a large, conclusive study of 222 patients, so it didn’t need the meta-analysis that American research often seems to require. The research showed that most of what I had been taught about oral amoxicillin was false. Amoxicillin is absorbed well even at doses above 50 mg/kg per day. It is absorbed reliably by full-term neonates, even mildly sick ones. It adequately crosses the blood-brain barrier. The French researchers measured serum levels and proved all this with both scientific principles and a clinical trial.
I have used this oral protocol (10 days total, after 2-3 days of IV therapy) on two occasions to treat GBS sepsis when I had informed consent of the parents and buy-in from the primary care pediatrician to be early adopters. I expected the Red Book would update its recommendations. That didn’t happen.
Meanwhile, I have seen other babies kept for 10 days in the hospital for IV therapy with resultant wasted costs (about $20 million/year in the United States) and income loss for the parents. I’ve treated complications and readmissions caused by peripherally inserted central catheter (PICC) line issues. One baby at home got a syringe of gentamicin given as an IV push instead of a normal saline flush. Mistakes happen at home and in the hospital.
Because late-onset GBS can be acquired environmentally, there always will be recurrences. Unless you are practicing defensive medicine, the issue isn’t the rate of recurrence; it is whether the more invasive intervention of prolonged IV therapy reduces that rate. Then balance any measured reduction (which apparently is zero) against the adverse effects of the invasive intervention, such as PICC line infections. This Bayesian decision making is hard for some risk-averse humans to assimilate. (I’m part Borg.)
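The balancing act described above can be sketched as simple expected-value arithmetic. This is a minimal illustration of the reasoning, not a clinical tool; every number below is an assumption chosen for the example, not a measured rate.

```python
# Hypothetical sketch of the risk-benefit balance for prolonged IV therapy.
# All values are illustrative assumptions, not data from any study.

recurrence_reduction = 0.0     # absolute reduction in recurrence from prolonged IV therapy ("apparently zero")
harm_per_recurrence = 1.0      # relative harm weight assigned to one recurrence
picc_complication_rate = 0.10  # assumed probability of a PICC line complication
harm_per_complication = 0.5    # relative harm weight assigned to one complication

# Expected net benefit of prolonged IV therapy versus an early oral switch:
net_benefit = (recurrence_reduction * harm_per_recurrence
               - picc_complication_rate * harm_per_complication)
print(f"expected net benefit: {net_benefit:+.3f}")
# A negative value means the invasive intervention causes more expected harm than it prevents.
```

With a measured recurrence reduction of zero, any nonzero complication rate makes the net benefit negative, which is the point of the paragraph above.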
Coon et al.3 have confirmed, using big data, that prolonged IV therapy for uncomplicated, late-onset GBS bacteremia does not generate a clinically significant benefit. It certainly is possible to sow doubt by asking for proof in a variety of subpopulations. Even in the era of intrapartum antibiotic prophylaxis, which has halved the incidence of GBS disease, GBS disease still occurs in about 2,000 babies per year in the United States. However, most are treated in community hospitals and are not included in the database used in this new report. With only 2-3 cases of GBS bacteremia per hospital per year, a multicenter, randomized controlled trial would be an unprecedented undertaking, is ethically problematic, and is not realistically happening soon. So these observational data, skillfully acquired and analyzed, are and will remain the best available data.
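The infeasibility claim can be checked with a back-of-the-envelope power calculation, using the standard normal-approximation formula for comparing two proportions. The 4% and 2% recurrence rates below are assumptions for illustration only, not figures from the article.

```python
import math

# Sample size per arm to detect a difference between two proportions,
# standard normal-approximation formula (two-sided test).
def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    z_a = 1.959964  # z for alpha/2 = 0.025
    z_b = 0.841621  # z for power = 0.80
    p_bar = (p1 + p2) / 2
    num = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# Suppose prolonged IV therapy could halve an assumed 4% recurrence rate to 2%:
n = n_per_arm(0.04, 0.02)
print(n, "patients per arm")            # on the order of a thousand per arm
enrolled_per_year = 49 * 2.5            # ~2-3 cases/hospital/year across 49 hospitals
print(2 * n / enrolled_per_year, "years to enroll")
```

Even with every children’s hospital in the database enrolling every case, a trial powered for a plausible effect would take well over a decade, which is why the observational data will remain the best available.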
This new article is in the context of multiple articles over the past decade that have disproven the myth of the superiority of IV therapy. Given the known risks and costs of PICC lines and prolonged IV therapy, the default should be, absent a credible rationale to the contrary, that oral therapy at home is better.
Coon et al. show that, by 2015, 5 of 49 children’s hospitals (10%) were early adopters and had already made the switch to mostly using short treatment courses for uncomplicated GBS bacteremia; 14 of 49 (29%) hadn’t changed at all from the obsolete Red Book recommendation. Given this new analysis, what are you laggards4 waiting for?
Dr. Powell is a pediatric hospitalist and clinical ethics consultant living in St. Louis. Email him at pdnews@mdedge.com.
References
1. Rogers EM. “Diffusion of Innovations,” 5th ed. (New York: Free Press, 2003).
2. Eur J Clin Pharmacol. 2007 Jul;63(7):657-62.
3. Pediatrics. 2018;142(5):e20180345.
4. https://en.wikipedia.org/wiki/Diffusion_of_innovations.
Promoting confrontation
The optimist says the glass is half-full. The pessimist says it is half-empty. An engineer says the glass is twice as large as needed to contain the specified amount of fluid. To some people, that mindset makes engineers negative people. We focus on weaknesses and inefficiencies. A chain is only as strong as its weakest link. There is no partial credit when building a bridge. Ninety-eight percent right is still wrong.
When I worked as an engineer, critiquing ideas was a daily activity. I am used to conflicting opinions. Industry trains people to be professional and act appropriately when disagreeing with a colleague. Tact is the art of making a point without making an enemy. Engineering has a strong culture of focusing on a problem rather than on personalities. Upper management made it clear that in any turf war, both sides will lose. Academia has a different culture. Turf wars in academia are so bitter because the stakes are so small.
Pediatrics has less confrontation and competitiveness than other specialties do. That makes the work environment more pleasant, as long as every other group in the hospital isn’t walking all over you. Pediatricians often view themselves as dedicated to doing what is right for the children, even to the point of martyrdom. Some early pediatric hospitalist programs got into economic trouble because they adopted tasks that benefited the children but that weren’t being performed by other physicians precisely because those tasks were neither valued nor compensated. Learning to say “No” is hard but necessary.
As a clinical ethics consultant, I was consulted when conflict had developed between providers and patients/parents or between different specialties. Ethics consults are rarely about what philosophers would call ethics. They are mostly about miscommunication, empowering voices to be heard, and clarifying values. Practical skills in de-escalation and mediation are more important than either law or philosophy degrees.
There are downsides to avoiding confrontation. Truth suffers. Integrity is lost. Goals become corrupted. I will give two examples. One ED had a five-level triage system. Level 1 was reserved for life-threatening situations such as gunshot wounds and resuscitations. So I was surprised to see a “bili” baby triaged at Level 1. He was a good baby with normal vitals. Admission for phototherapy was reasonable, but the urgency of a bilirubin of 19 mg/dL did not match that of a gunshot wound. A colleague warned me not to even consider challenging the practice. A powerful physician at that institution had made it policy years earlier.
I witnessed a similar dynamic many times at that institution. Residents are even better than 4-year-olds at noticing hypocritical behavior. Once they perceive that the dynamic is political power and not science, they adapt quickly. A couple of days later, I asked a resident if he really thought an IV was necessary for a toddler we were admitting. He replied no, but if he hadn’t put an IV in, the hospital wouldn’t get paid for the admission. To him, that was the unspoken policy. The action didn’t even cause him moral distress. I worry about that much cynicism so early in a career. Cognitive dissonance starts small and slowly creeps its way into everything.
The art of managing conflict is particularly important in pediatric hospital medicine because of its heavy investment in reducing overdiagnosis and overtreatment. Many pediatric hospitalists are based at academic institutions and are more subject to their turf wars than are outpatient colleagues practicing in small groups. The recent conference for pediatric hospital medicine was held in Atlanta, a few blocks from the Center for Civil and Human Rights. That museum evokes powerful images of struggles around the world. My two takeaway lessons: Silence is a form of collaboration. Tyrannical suppression of dissent magnifies suffering.
In poorly managed academic institutions, it can be harmful to one’s career to ask questions, challenge assumptions, and seek truth. A recent report found that the Department of Veterans Affairs health system also has a culture that punishes whistle-blowers. Nationally, politics has become polarized. Splitting, once considered a dysfunctional behavior, has become normalized. So I understand the reluctance to speak up. One must choose one’s battles.
Given the personal and career risks, why confront inaccurate research, wasteful practices, and unjust policies? I believe that there is a balance and a choice each person must make. Canadian engineers wear an iron ring to remind themselves of their professional responsibilities. Doctors wear white coats. Personally, I share a memory with other engineers of my generation. In January 1986, NASA engineers could not convince their managers about a risk. The space shuttle Challenger exploded. I heard about it in the medical school’s cafeteria. So for me, disputation is part of the vocation.
Dr. Powell is a pediatric hospitalist and clinical ethics consultant living in St. Louis. Email him at pdnews@mdedge.com.
The optimist says the glass is half-full. The pessimist says it is half-empty. An engineer says the glass is twice as large as needed to contain the specified amount of fluid. To some people, that mindset makes engineers negative people. We focus on weaknesses and inefficiencies. A chain is only as strong as its weakest link. There is no partial credit when building a bridge. 98% right is still wrong.
When I worked as an engineer, critiquing ideas was a daily activity. I am used to conflicting opinions. Industry trains people to be professional and act appropriately when disagreeing with a colleague. Tact is the art of making a point without making an enemy. Engineering has a strong culture of focusing on a problem rather than on personalities. Upper management made it clear that in any turf war, both sides will lose. Academia has a different culture. Turf wars in academia are so bitter because the stakes are so small.
Pediatrics has less confrontation and competitiveness than do other subspecialties. That makes the work environment more pleasant, as long as every other group in the hospital isn’t walking all over you. Pediatricians often view themselves as dedicated to doing what is right for the children, even to the point of martyrdom. Some early pediatric hospitalist programs got into economic trouble because they adopted tasks that benefited the children but that weren’t being performed by other physicians precisely because those tasks were neither valued nor compensated. Learning to say “No” is hard but necessary.
As a clinical ethics consultant, I was consulted when conflict had developed between providers and patients/parents or between different specialties. Ethics consults are rarely about what philosophers would call ethics. They are mostly about miscommunication, empowering voices to be heard and clarifying values. Practical skills in de-escalation and mediation are more important than either law or philosophy degrees.
There are downsides to avoiding confrontation. Truth suffers. Integrity is lost. Goals become corrupted. I will give two examples. One ED had a five-level triage system. Level 1 was reserved for life-threatening situations such as gunshot wounds and resuscitations. So I was surprised to see a “bili” baby triaged at Level 1. He was a good baby with normal vitals. Admission for phototherapy was reasonable, but the urgency of a bilirubin of 19 did not match that of a gunshot wound. A colleague warned me not to even consider challenging the practice. A powerful physician at that institution had made it policy years earlier.
I witnessed a similar dynamic many times at that institution. Residents are even better than 4-year-olds at noticing hypocritical behavior. Once they perceive that the dynamic is political power and not science, they adapt quickly. A couple of days later, I asked a resident if he really thought an IV was necessary for a toddler we were admitting. He replied no, but if he hadn’t put an IV in, the hospital wouldn’t get paid for the admission. To him, that was the unspoken policy. The action didn’t even cause him moral distress. I worry about that much cynicism so early in a career. Cognitive dissonance starts small and slowly creeps its way into everything.
The art of managing conflict is particularly important in pediatric hospital medicine because of its heavy investment in reducing overdiagnosis and overtreatment. Many pediatric hospitalists are located at academic institutions and are more subject to their turf wars than outpatient colleagues practicing in small groups. The recent conference for pediatric hospital medicine was held in Atlanta, a few blocks from the Center for Civil and Human Rights. That museum evokes powerful images of struggles around the world. My two takeaway lessons: Silence is a form of collaboration. Tyrannical suppression of dissent magnifies suffering.
In poorly managed academic institutions, it can be harmful to one’s career to ask questions, challenge assumptions, and seek truth. A recent report found that the Department of Veterans Affairs health system also has a culture that punishes whistle-blowers. Nationally, politics has become polarized. Splitting, once considered a dysfunctional behavior, has become normalized. So I understand the reluctance to speak up. One must choose one’s battles.
Given the personal and career risks, why confront inaccurate research, wasteful practices, and unjust policies? I believe that there is a balance and a choice each person must make. Canadian engineers wear an iron ring to remind themselves of their professional responsibilities. Doctors wear white coats. Personally, I share a memory with other engineers of my generation. In January 1986, NASA engineers could not convince their managers about a risk. The space shuttle Challenger exploded. I heard about it in the medical school’s cafeteria. So for me, disputation is part of the vocation.
Dr. Powell is a pediatric hospitalist and clinical ethics consultant living in St. Louis. Email him at pdnews@mdedge.com.
Significant figures: The honesty in being precise
Physicists have strict rules about significant figures. Medical journals lack this professional discipline, and the result is distortions that mislead readers.
Whenever you measure and report something in physics, the precision of the measurement is reflected in how the value is written. Writing a result with more digits implies that a higher precision was achieved. If that wasn’t the case, you are falsely claiming skill and accomplishment. You’ve entered the zone of post-truth.
This point was taught by my high school physics teacher, Mr. Gunnar Overgaard, may he rest in peace. Suppose we measured the length of the lab table with a meter stick. We repeated the measurement three times and computed an average: our table was 243.7 cm long. If we wrote 243.73 or 243.73333, we got a lower grade. Meter sticks have markings only down to 0.1 cm, so the precision of the reported measurement should properly reflect that limitation.
Researchers in medicine seem to have skipped that lesson in physics lab. In medical journals, the default seems to be to report measurements to two decimal places, such as 16.67%, which is a gross distortion of the precision when that number really means 2 out of 12 patients had the finding.
This issue of precision came up recently in two papers published about the number of deaths caused by Hurricane Maria in Puerto Rico. The official death toll was 64. This number became a political hot potato when President Trump cited it as if it were evidence that he and the current local government had managed the emergency response better than George W. Bush did for Katrina.
On May 29, 2018, some researchers at the Harvard School of Public Health, a prestigious institution, published an article in The New England Journal of Medicine, a prestigious journal. You would presume that pair could report properly. The abstract said “This rate yielded a total of 4,645 excess deaths during this period (95% CI, 793 to 8,498).”1 Many newspapers published the number 4,645 in a headline. Most newspapers didn’t include all of the scientific mumbo jumbo about bias and confidence intervals.
However, the number 4,645 did not pass the sniff test at many newspapers, including the Washington Post. Their headline began “Harvard study estimates thousands died”2 and that story went on to clarify that “The Harvard study’s statistical analysis found that deaths related to the hurricane fell within a range of about 800 to more than 8,000.” That is one significant digit. Then the fact checkers went to work on it. They didn’t issue a Pinocchio score, but under a headline of “Did exactly 4,645 people die in Hurricane Maria? Nope”3 the fact checkers concluded that “it’s an egregious example of false precision to cite the ‘4,645’ number without explaining how fuzzy the number really is.”
The situation was compounded 3 days later when another news report had the Puerto Rico Department of Public Health putting the death toll at 1,397. Many assumptions go into determining what counts as an excess death. If false precision makes it appear that the scientists have a political agenda, it casts doubt on whether the assumptions they made are objective and unbiased.
The result on social media was predictable. Outrage was expressed, as always. Lawsuits have been filed. The reputations of all scientists have been impugned. The implication is that, depending on your political polarization, you can choose the number 64, 1,000, 1,400, or 4,645 and any number is just as true as another. Worse, instead of focusing on the severity of the catastrophe and how we might have responded better then and better now and with better planning for the future, the debate has focused on alternative facts and fake scientific news. Thanks, Harvard.
So in the spirit of thinking globally but acting locally, what can I do? I love my editor. I have hinted before about how much easier it is to read, as well as more accurate scientifically, to round the numbers that we report. We’ve done it a few times recently, but now that the Washington Post has done it on a major news story, should this practice become the norm for journalism? If medical journal editors won’t handle precision honestly, other journalists must step up. I’m distressed when I review an article that says 14.6% agreed and 79.2% strongly agreed when I know those three-digit percentages really mean 7/48 and 38/48; they should be rounded to two significant figures. And isn’t it easier to read and comprehend to report that three treatment groups had positive findings of 4%, 12%, and 10% rather than 4.25%, 12.08%, and 9.84%?
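The rounding asked for here is entirely mechanical. As a minimal sketch (the `to_sig_figs` helper is my own illustration, not any journal's style rule), two significant figures for the survey percentages above would look like:

```python
import math

def to_sig_figs(x: float, sig: int = 2) -> float:
    """Round x to `sig` significant figures."""
    if x == 0:
        return 0.0
    # How many decimal places keep `sig` significant digits?
    digits = sig - int(math.floor(math.log10(abs(x)))) - 1
    return round(x, digits)

# The survey fractions from the text: 7/48 agreed, 38/48 strongly agreed.
for num, den in [(7, 48), (38, 48)]:
    pct = 100 * num / den
    print(f"{num}/{den} = {pct:.1f}% -> {to_sig_figs(pct, 2):g}%")
# prints:
# 7/48 = 14.6% -> 15%
# 38/48 = 79.2% -> 79%
```

Reporting "15%" instead of "14.58%" honestly reflects that a denominator of 48 cannot support four digits of precision.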
Scientists using this false precision (and peer reviewers who allow it) need to be corrected. They are trying to sell their research as a Louis Vuitton handbag when we all know it is only a cheap knockoff.
Dr. Powell is a pediatric hospitalist and clinical ethics consultant living in St. Louis. Email him at pdnews@mdedge.com.
References
1. N Engl J Med. 2018 May 29. doi: 10.1056/NEJMsa1803972.
2. “Harvard study estimates thousands died in Puerto Rico because of Hurricane Maria,” by Arelis R. Hernández and Laurie McGinley, The Washington Post, May 29, 2018.
3. “Did exactly 4,645 people die in Hurricane Maria? Nope.” by Glenn Kessler, The Washington Post, June 1, 2018.
Learning from the 2017 Oscar fiasco
It was a “never event.” At the very end of the 2017 Academy Awards presentation, the winner for Best Picture was announced. It was wrong. Two and a half minutes later it was corrected. The true winner was “Moonlight,” not “La La Land.” But by then much damage had been done.
I watched it happen live on TV and reviewed it again on YouTube. Several news agencies investigated and reported on what happened. I don’t have any inside information beyond that, but my engineering perspective can illuminate how to reduce mistakes.
The first lesson is how quickly people seek to assign blame after something goes wrong. I saw various online news agencies say Warren Beatty had announced the wrong winner. While he opened the envelope, it was Faye Dunaway who actually made the announcement of “La La Land.” Furthermore, Warren and Faye were merely reading the card. Warren had been given the wrong envelope, as high-resolution photographs prove. The envelope was a duplicate of the one for the prize announced just before theirs, the Best Actress award. The card said Emma Stone and, in a smaller font, “La La Land,” the film in which she starred. Warren hesitated because of how this was written on the card. Faye thought he was pausing as a shtick to increase suspense, so she glanced at the card and blurted out “La La Land.”
Experts in quality improvement have learned that the best way to reduce errors is to resist this tendency to assign blame. A better approach is to assume, absent evidence to the contrary, that everyone is acting responsibly and sincerely to help the patient. Hear both sides of the story before jumping to any conclusions. Find systemic factors that contributed to a human error. Then focus on ameliorating systemic weaknesses.
One contributing factor for the error at the Oscars was that there were two copies of the set of award envelopes, with one set available on each side of the stage. This way the presenters can enter from either side of the stage. They are handed an envelope by one of the two auditors from PricewaterhouseCoopers, who are the only ones who know the contents.
A key component of safety is having check backs. The envelopes have the name of the award on the outside. One might hope the presenter would double check that they are being given the correct envelope by the auditor. But backstage is a very nervous and hectic place for the presenters. Actors are not professionals dedicated to safety.
Medical care is different. Before giving a transfusion, one nurse reads the number on the bag of blood to another nurse, who confirms that it matches a paper form. That simple act can prevent mistakes. Perhaps the auditor handing the envelope to the Oscar presenter should ask the presenter, who knows which award s/he is scheduled to announce, to read out loud the award title on the front of the envelope.
Clearly, Warren Beatty was confused by the contents of the envelope. He was expecting a card to have the name of a film, not the name of an actress with the film’s name in small print below it. He didn’t know what action to take and hesitated. Faye Dunaway plunged forward and misinterpreted the card. A key component of quality is making it safe for anyone, if they are not confident in what is happening, to stop the proceeding, ask questions, and challenge plans. For example, there are time-outs prior to surgery. A second component is presenting information in a form less likely to be misinterpreted. Medicine has a problem with many sound-alike and look-alike drug names, so sometimes these words are spelled with particular letters capitalized, to distinguish them. I wish EHRs would present lab results in large, bold font.
Another contributing factor here was that Faye misinterpreted Warren’s behaviors as a joke. Major airlines utilize the “sterile cockpit.” During the few minutes that they are running through the preflight checklist, the pilot and copilot do not discuss last night’s football game, crack jokes, or engage in any other extraneous conversations. They avoid interruptions and distractions, focusing solely on the task. Sign outs in medicine need to adopt this habit.
There is a concern that one of the auditors tweeted a picture of Emma Stone backstage holding her Oscar at the same time the fiasco was happening on stage. In the modern world, cell phones and selfies are a key source of distraction, errors, and car accidents.
Per the Army, “Prior planning prevents poor performance.” A couple of days before the Oscar fiasco, the auditors were interviewed, and they revealed that they didn’t have an action plan to deal with a mistaken announcement. They figured it was extremely unlikely and that the circumstances would determine the best response.
Experience has shown that in the hours leading up to a pediatric code, there may be several opportunities to recognize the risk and intervene so that blame cannot be assigned to a single person or action. Mock codes prepare people to think on their feet. And it is important to have a clearly designated person in charge of a code. Leadership matters.
In the Oscar fiasco, the damage was quickly limited by the gracious words of a “La La Land” producer. He assessed the situation, announced the mistake, beckoned the “Moonlight” cast and crew to the stage, graciously complimented them, showed the correct award envelope and card to the camera, and offered the statue to the correct producer. Then he hastened his team off the stage. These actions of responsibility, truthfulness, transparency, and grace staunched the bleeding, minimized the damage, and, as best as possible, remediated the error. Movie producers are experts at dealing with crises and catastrophes. Medical staff, when revealing errors to patients, can learn from this role model.
Dr. Powell is a pediatric hospitalist and clinical ethics consultant living in St. Louis. Email him at pdnews@frontlinemedcom.com.
There is a concern that one of the auditors tweeted a picture of Emma Stone backstage holding her Oscar at the same time the fiasco was happening on stage. In the modern world, cell phones and selfies are a key source of distraction, errors, and car accidents.
Per the Army, “Prior planning prevents poor performance.” A couple of days before the Oscar fiasco, the auditors were interviewed and revealed that they didn’t have an action plan for dealing with a mistaken announcement. They figured it was extremely unlikely and that the circumstances would determine the best response.
Experience has shown that in the hours leading up to a pediatric code, there are often several missed opportunities to recognize the risk and intervene, so blame cannot fairly be assigned to a single person or action. Mock codes prepare people to think on their feet. And it is important to have a clearly designated person in charge of a code. Leadership matters.
In the Oscar fiasco, the damage was quickly limited by the gracious words of a “La La Land” producer. He assessed the situation, announced the mistake, beckoned the “Moonlight” cast and crew to the stage, graciously complimented them, showed the correct award envelope and card to the camera, and offered the statue to the correct producer. Then he hastened his team off the stage. These actions of responsibility, truthfulness, transparency, and grace staunched the bleeding, minimized the damage, and, as best as possible, remediated the error. Movie producers are experts at dealing with crises and catastrophes. Medical staff, when revealing errors to patients, can learn from this role model.
Dr. Powell is a pediatric hospitalist and clinical ethics consultant living in St. Louis. Email him at pdnews@frontlinemedcom.com.
Toy stethoscopes
Many of my articles are inspired when I observe discordant things juxtaposed. As we move deep into winter, once again I am confronted with the issue of infection control in the office and on the ward. Hospitals have gowns, gloves, masks, and toy stethoscopes. My outpatient offices rarely used more than the sink. In urgent care clinic, each evening I would swab three or four throats for strep, with one or two turning positive. I thought nothing of it, other than being glad when gagging a patient that I wear glasses. In the hospital, I must gown, glove, and mask for a patient with strep throat. The variations in practice between hospitals (I’ve been credentialed in 30) do not make me confident in the evidence base for infection control practices. I mentioned the Red Book to a second-year resident last week. He said he had seen it on a shelf but never actually used it.
In medical school, I was taught that the most important part of a stethoscope is between the ears. I believe that statement is true, but in a similar way to how I choose wines. My palate can’t tell the difference between a $15 and a $50 bottle of wine, so buying more expensive wine is a waste. However, a $3 bottle of wine is clearly inferior, if not undrinkable. There are oenophiles (one a distant cousin in Norway) who have trained their palates to tell the difference in wines, just as there are audiophiles who support the sales of $1,000 stereo speakers. Some fraction of those snobs may have justification. So, if cardiologists have strong opinions on stethoscopes, I won’t begrudge them their choice of a more expensive model. Their tastes do not mean that the average person should spend that much on wine, speakers, or stethoscopes. I will assert that there was a time when I could tell a day or two in advance that my otoscope bulb was going to burn out. The color balance was wrong. I carried a pocket otoscope for a few years when rounding in the hospital, but never found it as accurate as my original one. Every craftsman gets accustomed to their best tools.
A professional should be aware of the minimum quality of tool needed to get the job done.
Toy isolation stethoscopes ($3 each retail in bulk) add nothing to my discernment of an infant with bronchiolitis who is distressed, so I consider that equipment a waste of money and polluting to the environment. I typically use my stethoscope and foam it on leaving the room. There is evidence that either foam or alcohol pads are effective1 in killing germs, but no proof that this hygiene makes a difference clinically.2 The myriad researchers who have published about stethoscope contamination have stopped at padding their academic portfolios with something easy to publish, which basically is a high school science project using agar plates. They then make insinuations about policy, without any cost-benefit analysis. They really haven’t been bothered enough to advance the science of clinical medicine and actually measure a clinical impact of these policies. It is a corruption of science created by the publish-or-perish environment.
One survey found that 45% of physicians disinfect their stethoscope annually or less often. Laundering of white coats follows a similar pattern, which is why the British National Health Service banned lab coats for physicians 10 years ago. No ties or long-sleeve shirts either. I am smug knowing that my sartorial sense was ahead of my time in this regard.
The quality-improvement work of Ignaz Semmelweis should be required reading for all physicians. The control chart3 he published on puerperal fever in Vienna in the 1840s is spectacular. Infection control is important. Modern medical science cannot produce a similar control chart to justify the amount of dollars spent annually on gowns, gloves, masks, and toy stethoscopes. Sad.
Dr. Powell is a pediatric hospitalist and clinical ethics consultant living in St. Louis. Email him at pdnews@frontlinemedcom.com.
References
1. Am J Infect Control. 2009 Apr;37(3):241-3.
2. J Hosp Infect. 2015 Sep;91(1):1-7.
3. https://en.wikipedia.org/wiki/Historical_mortality_rates_of_puerperal_fever
Guidelines are not cookbooks
For many years I have counseled medical students and residents that half of what I was taught in medical school has since been proven obsolete or frankly wrong. I counsel them that I have no reason to believe that I am any better than my professors were. So I wish them luck sorting out what is true. Earlier in my career, that warning was mild hyperbole, but not anymore.
Upper respiratory infections (URIs) are the most common reason for an office visit during the winter. Bronchiolitis is the most frequent diagnosis for a winter admission of an infant to a community hospital. Pediatricians have nuanced assessments and many options when treating these diseases. Best practices have changed frequently over the past 3 decades, mostly by eliminating previously espoused treatments as ineffective. In infants and young children, those obsolete treatments include decongestants and cough suppressants for young children with common colds, inhaled beta-agonists and steroids for infants with bronchiolitis, and antibiotics for simple otitis media in older children. In other words, most of what I was originally taught.
There is a discontinuity between guidelines that forbid routine steroids and beta-agonists for bronchiolitis in infants, and guidelines that strongly prescribe steroids, metered dose inhalers, and asthma action plans for all discharged wheezers over age 2 years. When I worked as a hospitalist in the pulmonology department, I frequently diagnosed asthma under age 1 year. As a general pediatric hospitalist, one winter I twice ran afoul of a hospital quality metric that benchmarked 100% compliance with providing steroids, inhaled corticosteroids, and asthma action plans on discharge for all wheezers over age 2. Fortunately for both me and the quality team working on that quality dashboard, my thorough documentation of why I didn’t think a particular wheezer had asthma was detailed enough to satisfy peer review.
Historically, medical knowledge has depended on these types of observations, which are then taught to the next generation of physicians and, if confirmed repeatedly, become memes with some degree of reliability. An all-too-typical Cochrane Library entry may challenge these memes by looking at 200 articles, finding 20 relevant studies, selecting only 2 underpowered studies as meeting its randomized controlled trial criteria, and then concluding that there is “insufficient evidence” to prove the treatment works. But absence of proof is not proof of absence. Twenty-five years after the phrase “evidence-based medicine” was coined, our medical knowledge base has not been purified.
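The “underpowered study” problem behind those “insufficient evidence” verdicts can be made concrete with a quick simulation. This is a sketch of my own, not drawn from any of the studies above; the effect size, sample size, and trial count are illustrative assumptions. A real, clinically meaningful difference exists, yet a small trial usually fails to reach statistical significance:

```python
import math
import random

def two_sample_z(xs, ys):
    """Normal-approximation test statistic for a difference in means."""
    nx, ny = len(xs), len(ys)
    mx, my = sum(xs) / nx, sum(ys) / ny
    vx = sum((x - mx) ** 2 for x in xs) / (nx - 1)
    vy = sum((y - my) ** 2 for y in ys) / (ny - 1)
    se = math.sqrt(vx / nx + vy / ny)
    return (mx - my) / se

def simulated_power(n_per_arm=20, true_effect=0.3, trials=2000, seed=1):
    """Fraction of simulated trials in which a real effect reaches |z| > 1.96."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        a = [rng.gauss(0.0, 1.0) for _ in range(n_per_arm)]         # control arm
        b = [rng.gauss(true_effect, 1.0) for _ in range(n_per_arm)]  # treated arm
        if abs(two_sample_z(a, b)) > 1.96:
            hits += 1
    return hits / trials

print(f"power ≈ {simulated_power():.2f}")  # well under the conventional 80%
```

With 20 patients per arm and a modest but real effect, only a small minority of such trials come out “positive.” The other trials are the negative studies that then get cited as evidence the treatment doesn’t work.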
In the 17th century, French philosopher Rene Descartes concluded that too much of what he had been taught was wrong. He tried to purify his knowledge by starting over and only trusting what he could deduce with absolute certainty. His first deduction was “I think, therefore I am.”
In medicine, absolute certainty isn’t possible. Using 95% confidence intervals for a research paper does not even mean the result is 95% likely to be right. So part of practicing medicine is judging which evidence to trust, and that judgment is tainted with confirmation bias. It is a very imperfect art.
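The claim that “significant at the 95% level” is not the same as “95% likely to be right” follows from simple arithmetic on false positives. The prior and power figures below are illustrative assumptions, not data from any particular field:

```python
def prob_true_given_significant(prior, power, alpha=0.05):
    """Bayes' rule: of all 'significant' findings, what fraction are real?"""
    true_pos = prior * power          # real effects that reach significance
    false_pos = (1 - prior) * alpha   # null effects that do so by chance
    return true_pos / (true_pos + false_pos)

# If only 10% of tested hypotheses are true and studies have 50% power,
# a "significant" result reflects a real effect only about half the time.
print(round(prob_true_given_significant(prior=0.10, power=0.50), 2))  # → 0.53
```

The lower the prior plausibility of the hypotheses being tested and the weaker the studies, the less a p-value under 0.05 actually tells us.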
Dr. Powell is a pediatric hospitalist and clinical ethics consultant living in St. Louis. Email him at pdnews@frontlinemedcom.com.
The cost of experimental medicine
It has been a remarkable summer of milestones and crises for high-technology medicine.
An FDA panel has unanimously approved a gene therapy: The patient’s own immune cells are taken from his or her body, genetically modified, and reinfused to attack cancer. While the treatment has some dangers, it can be worth trying when conventional therapy has failed, and it appears to be curative when it works. Final FDA approval is expected in September.
Scientists have also announced the first use of gene therapy to effectively treat a human embryo. They successfully replaced a defective gene with an engineered correction. The in vitro embryos were not implanted.
While those breakthroughs were occurring, the parents of Charlie Gard, an infant in England with a very rare and devastating mitochondrial disease, were seeking experimental therapy for their child. The medical staff disagreed with the parents: They recommended that the best thing for Charlie would be to stop the ventilator and allow him to die, rather than let him continue to suffer. Three British courts reviewed Charlie’s case and concurred with the medical staff; on appeal, the European Court of Human Rights also denied the parents’ wishes.
End-of-life cases similar to Charlie’s are not rare. In modern medicine, parents sometimes must make the heart-wrenching decision to stop aggressive therapies and accept that death is imminent and unavoidable. Many factors go into making that decision. Both the courts and medical staff presume that parents are the best decision makers. Generally, medical staff provide emotional and spiritual support to the parents, along with a tincture of time. In the vast majority of cases, parents and physicians come to agree on the course of care, but sometimes there are irreconcilable disagreements.
It is rare for courts to overrule parents. The government typically intervenes only when the harm from a parent’s choice exceeds some threshold. For instance, it is not in a child’s best interest to be put in a car during a blizzard and driven to the store to get cigarettes. But neither is it wise to have an intrusive government reviewing every choice a parent makes. The potential harm must be large enough, likely enough, and imminent enough before most judges will intervene. At the least, the law will insist the child be in a car seat.
In Charlie’s case, the medical staff and the judges all explicitly said that the cost of therapy did not factor into their decision making; they looked solely at what was best for Charlie. The focus was on whether the unproven potential benefits of experimental therapy outweighed the risk of suffering caused by the therapy and continued intensive medical care.
Even when a bedside decision ignores the financial impact, money often determines which therapeutic choices are available in the first place. There are also issues of equitable access to be raised and weighed, and large expenditures constrain other social choices.
Money influenced the actions of Martin Shkreli, who is best known as the pharmaceutical company executive who markedly increased the price of a drug. Mr. Shkreli was recently convicted on three of eight charges for securities fraud, and sentencing is pending; the convictions were not related to the price increase.
Money also appears to have played a key role in the tragic deaths of more than 30 infants in northern India, who died when the hospital’s oxygen tanks went empty. The company responsible for refilling the oxygen tanks didn’t do so because, it claimed, the hospital wasn’t paying its bills. Public outrage has the government investigating the situation.
The United States has created some amazing technologies to save individual, identifiable lives, but they come at a high price that often costs lives in ways more subtle than the incident in India. At some point, the government and the public are responsible for either financing or rationing care, but that doesn’t absolve the scientists completely. The Russell-Einstein (Pugwash) Manifesto established that scientists have a moral accountability for the negative consequences of creating new technology, and that includes the financial aspects.
Dr. Powell is a pediatric hospitalist and clinical ethics consultant living in St. Louis. Email him at pdnews@frontlinemedcom.com.