Editor’s note: This article has been provided by The Doctors Company, the exclusively endorsed medical malpractice carrier for the Society of Hospital Medicine.
Artificial intelligence (AI) has proven valuable in the COVID-19 pandemic and shows promise for mitigating future health care crises. During the pandemic’s first wave in New York, for example, Mount Sinai Health System used an algorithm to help identify patients ready for discharge. Such systems can help overburdened hospitals manage personnel and the flow of supplies in a medical crisis so they can continue to provide superior patient care.1
Pandemic applications have demonstrated AI’s potential not only to lift administrative burdens, but also to give physicians back what Eric Topol, MD, founder and director of Scripps Research Translational Institute and author of Deep Medicine, calls “the gift of time.”2 More time with patients contributes to clear communication and positive relationships, which lower the odds of medical errors, enhance patient safety, and potentially reduce physicians’ risks of certain types of litigation.3
However, physicians and health systems will need to approach AI with caution. Many unknowns remain – including liability risks and the potential to worsen preexisting bias. The law will need to evolve to account for AI-related liability scenarios, some of which are yet to be imagined.
Like any emerging technology, AI brings risk, but its promise of benefit should outweigh the probability of negative consequences – provided we remain aware of and mitigate the potential for AI-induced adverse events.
AI’s pandemic success limited due to fragmented data
Innovation is the key to success in any crisis, and many health care providers have shown their ability to innovate with AI during the pandemic. For example, researchers at the University of California, San Diego, health system who were designing an AI program to help doctors spot pneumonia on a chest x-ray retooled their application to assist physicians fighting coronavirus.4
Meanwhile, AI has been used to identify COVID-19–specific symptoms: A computer sifting medical records elevated anosmia, the loss of the sense of smell, from an anecdotal connection to an officially recognized early symptom of the virus.5 This information now helps physicians distinguish COVID-19 from influenza.
However, holding back more innovation is the fragmentation of health care data in the United States. Most AI applications for medicine rely on machine learning; that is, they train on historical patient data to recognize patterns. Therefore, “Everything that we’re doing gets better with a lot more annotated datasets,” Dr. Topol says. Unfortunately, because of our disparate systems, we don’t have centralized data.6 And even if our data were centralized, researchers lack enough reliable COVID-19 data to perfect algorithms in the short term.
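To make the idea of “training on annotated data” concrete, here is a minimal, purely illustrative sketch of supervised machine learning. Nothing in it comes from the systems mentioned in this article: the feature names, labels, and the simple nearest-centroid rule are all hypothetical toy examples, chosen only to show why larger annotated datasets yield better-learned patterns.

```python
# Illustrative sketch only: a toy supervised learner that "trains" on
# annotated historical records by averaging the feature vectors seen
# for each label, then classifies a new case by the closest average.
# All features and labels below are hypothetical.

def train(records):
    """Learn one average feature vector (a crude 'pattern') per label."""
    sums, counts = {}, {}
    for features, label in records:
        total = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            total[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in total]
            for label, total in sums.items()}

def predict(centroids, features):
    """Assign the label whose learned pattern lies closest to the new case."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: sq_dist(centroids[label], features))

# Toy annotated dataset: ([temperature_deviation, cough_score], diagnosis).
annotated = [
    ([2.1, 0.9], "flu"),
    ([1.8, 0.8], "flu"),
    ([0.2, 0.1], "healthy"),
    ([0.1, 0.2], "healthy"),
]
model = train(annotated)
print(predict(model, [1.9, 0.7]))  # closest learned pattern wins
```

With only four annotated records, the learned “patterns” are crude averages; each additional labeled example sharpens them – which is the everyday sense of Dr. Topol’s point that everything gets better with more annotated datasets.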
Or, put in bleaker terms by the Washington Post: “One of the biggest challenges has been that much data remains siloed inside incompatible computer systems, hoarded by business interests and tangled in geopolitics.”7
The good news is that machine learning and data science platform Kaggle is hosting the COVID-19 Open Research Dataset, or CORD-19, which contains well over 100,000 scholarly articles on COVID-19, SARS, and other relevant infections.8 In lieu of a true central repository of anonymized health data, such large datasets can help train new AI applications in search of new diagnostic tools and therapies.