Transparency and ‘explainability’
The ability to explain what goes into AI tools is essential to maintaining trust in the health care system, Dr. Collins said.
“Part of knowing how much trust to place in the system is the transparency of those systems and the ability to audit how well the algorithm is performing,” Dr. Collins said. “The system should also regularly report to users the level of certainty with which it is providing an output rather than providing a simple binary output.”
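As a minimal sketch of that idea (the data, model, and library choice here are illustrative assumptions, not anything described by Dr. Collins), a classifier can surface a probability alongside its prediction rather than a bare binary output:

```python
# Minimal sketch: report a level of certainty alongside the prediction
# instead of a bare binary output. Data, model, and library are
# illustrative assumptions only.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Hypothetical training data standing in for clinical features.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
model = LogisticRegression().fit(X, y)

label = model.predict(X[:1])[0]        # binary output: 0 or 1
proba = model.predict_proba(X[:1])[0]  # probability for each class

print(f"Predicted class: {label}, certainty: {proba[label]:.0%}")
```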
Dr. Collins recommends that providers develop an understanding of the limits of AI regulation as well, which might include learning how a system was approved and how it is monitored.
“The FDA has oversight over some applications of AI in health care, for software as a medical device, but there’s currently no dedicated process to evaluate the systems for the presence of bias,” Dr. Collins said. “The gaps in regulation leave the door open for the use of AI tools in clinical care that contain significant biases.”
Dr. Haidet likened AI tools to the Global Positioning System: A good GPS will let users see alternate routes, opt out of toll roads or highways, and highlight why routes have changed. But users need to understand how to read the map so they can tell when something seems amiss.
Dr. Collins and Dr. Haidet report no relevant financial relationships.
A version of this article first appeared on Medscape.com.