AI Hallucinations
Dr. Yan broached an issue that occasionally comes up: AI hallucinations, inaccurate or misleading responses that stem from incomplete training data or intrinsic biases within the model. He pointed to the case of a doctor discussing issues related to a patient’s hands, feet, and mouth, which the AI model summarized as “the patient being diagnosed with hand, foot, and mouth disease.”
Another example he provided was a request to generate a letter of medical necessity for the use of ustekinumab (Stelara) to treat hidradenitis suppurativa (HS) in a child, with references supporting the drug’s effectiveness and safety in children. The AI system generated “false references that sounded like they should be real because the authors are often people who have written in that field or on that subject,” said Dr. Yan.
When pressed, the system acknowledged that the references were hypothetical but said they were meant to illustrate the types of studies that would typically support the use of this drug in pediatric patients with HS. “It’s well meaning, in the sense that it’s trying to help you achieve your goals using this training system,” said Dr. Yan.
“If you’re skeptical about a response, double-check the answer with a Google search or run the response through another AI [tool] asking it to check if the response is accurate,” he added.
While AI systems won’t replace the clinician, they continue to improve and grow more sophisticated. Dr. Yan advised keeping up with emerging developments and engaging with and adapting the most appropriate AI tools for an individual clinician’s work.
Asked to comment on the presentation at the SPD meeting, Sheilagh Maguiness, MD, director of the Division of Pediatric Dermatology at the University of Minnesota, Minneapolis, who, like many other physicians, is increasingly testing AI tools, said she foresees a time when AI scribes fully take over documentation tasks during patient interactions.
“The hope is that if the AI scribes get good enough, we can just open our phone, have them translate the interaction, and create the notes for us.”
While she likes the idea of using ChatGPT to help with tasks such as writing letters of medical necessity for medications, she said Dr. Yan’s comments reiterated the importance of “checking and double-checking ChatGPT because it’s not correct all the time.” She particularly welcomed the advice “that we can just go back and ask it again to clarify, and that may improve its answers.”
Dr. Yan disclosed an investment portfolio that includes companies working in the AI space, among them Google, Apple, Nvidia, Amazon, Microsoft, and Arm. Dr. Maguiness had no relevant disclosures.
A version of this article first appeared on Medscape.com.