AI introduces new questions around liability
While AI may eventually be assigned legal personhood, it is not, in fact, a person: It is a tool wielded by individual clinicians, by teams, by health systems, and even by multiple systems collaborating. Our current liability laws are not ready for the era of digital medicine.
AI algorithms are not perfect. Because we know that diagnostic error is already a major allegation in malpractice claims, we must ask: What happens when a patient alleges that diagnostic error occurred because a physician or physicians leaned too heavily on AI?
In the United States, testing delays have threatened the safety of patients, physicians, and the public by delaying diagnosis of COVID-19. But again, health care providers have applied real innovation – generating novel and useful ideas and applying those ideas – to this problem. For example, researchers at Mount Sinai became the first in the country to combine AI with imaging and clinical data to produce an algorithm that can detect COVID-19 based on computed tomography scans of the chest, in combination with patient information and exposure history.9
AI in health care can help mitigate bias – or worsen it
Machine learning is only as good as the information provided to train the machine. Models trained on partial datasets can skew toward demographics that turned up more often in the data – for example, White race or men over 60. There is concern that “analyses based on faulty or biased algorithms could exacerbate existing racial gaps and other disparities in health care.”10 Already during the pandemic’s first waves, multiple AI systems used to classify chest x-rays were found to show racial, gender, and socioeconomic biases.11
Such bias creates high potential for poor recommendations, including false positives and false negatives. It is critical that system builders be able to explain and qualify their training data, and that those who best understand AI-related system risks are the ones who influence health care systems and alter applications to mitigate AI-related harms.12
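The skew described above can be illustrated with a toy model. The sketch below is purely hypothetical – the “biomarker,” the group cutoffs, and the 90/10 training mix are invented for illustration and are not drawn from any cited study. A single decision threshold is fit to minimize overall error on data dominated by one patient group, and the underrepresented group then suffers a markedly higher error rate:

```python
import random

random.seed(0)

# Hypothetical 1-D "biomarker": the true disease cutoff differs by group.
# Group A (majority in training data): disease if biomarker > 5
# Group B (minority): disease if biomarker > 7
def make_patients(n, cutoff):
    return [(x, x > cutoff) for x in (random.uniform(0, 10) for _ in range(n))]

# Training set is 90% group A, 10% group B.
train = make_patients(900, 5.0) + make_patients(100, 7.0)

# "Training": choose the single threshold that minimizes overall error,
# which inevitably tracks the majority group's cutoff.
best_t = min((t / 10 for t in range(101)),
             key=lambda t: sum((x > t) != y for x, y in train))

def error_rate(patients, t):
    return sum((x > t) != y for x, y in patients) / len(patients)

test_a = make_patients(1000, 5.0)
test_b = make_patients(1000, 7.0)
print(f"learned threshold: {best_t:.1f}")
print(f"group A error: {error_rate(test_a, best_t):.2%}")
print(f"group B error: {error_rate(test_b, best_t):.2%}")
```

Because the threshold is tuned almost entirely to group A, group B patients in the gap between the two cutoffs are systematically misclassified as positive – the false positives and false negatives described above, produced by nothing more than an imbalanced training set.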
AI can help spot the next outbreak
More than a week before the World Health Organization released its first warning about a novel coronavirus, the AI platform BlueDot, created in Toronto, spotted an unusual cluster of pneumonia cases in Wuhan, China. Meanwhile, at Boston Children’s Hospital, the AI application HealthMap was scanning social media and news sites for signs of disease clusters, and it, too, flagged the first signs of what would become the COVID-19 outbreak – days before the WHO’s first formal alert.13
These innovative applications of AI in health care demonstrate real promise for detecting future outbreaks of new viruses early. Earlier detection will allow health care providers and public health officials to get information out sooner, reducing the load on health systems and, ultimately, saving lives.
Dr. Anderson is chairman and chief executive officer, The Doctors Company and TDC Group.
References
1. Gold A. Coronavirus tests the value of artificial intelligence in medicine. Fierce Biotech. 2020 May 22.
2. Topol E. Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. New York: Hachette Book Group; 2019:285.
3. The Doctors Company. The Algorithm Will See You Now: How AI’s Healthcare Potential Outweighs Its Risk. 2020 Jan.
4. Gold A. Coronavirus tests the value of artificial intelligence in medicine. Fierce Biotech. 2020 May 22.
5. Cha AE. Artificial intelligence and COVID-19: Can the machines save us? Washington Post. 2020 Nov 1.
6. Reuter E. Hundreds of AI solutions proposed for pandemic, but few are proven. MedCity News. 2020 May 28.
7. Cha AE. Artificial intelligence and COVID-19: Can the machines save us? Washington Post. 2020 Nov 1.
8. Lee K. COVID-19 will accelerate the AI health care revolution. Wired. 2020 May 22.
9. Mei X et al. Artificial intelligence–enabled rapid diagnosis of patients with COVID-19. Nat Med. 2020 May 19;26:1224-8. doi: 10.1038/s41591-020-0931-3.
10. Cha AE. Artificial intelligence and COVID-19: Can the machines save us? Washington Post. 2020 Nov 1.
11. Wiggers K. Researchers find evidence of racial, gender, and socioeconomic bias in chest X-ray classifiers. The Machine: Making Sense of AI. 2020 Oct 21.
12. The Doctors Company. The Algorithm Will See You Now: How AI’s Healthcare Potential Outweighs Its Risk. 2020 Jan.
13. Sewalk K. Innovative disease surveillance platforms detected early warning signs for novel coronavirus outbreak (nCoV-2019). The Disease Daily. 2020 Jan 31.