AI in Medicine: Are Large Language Models Ready for the Exam Room?

Too Little Evaluation

For any improvement strategy to work, LLMs — and all AI-assisted healthcare tools — first need a better evaluation framework. So far, LLMs have “been used in really exciting ways but not really well-vetted ways,” Tamir said.

While some AI-assisted tools, particularly in medical imaging, have undergone rigorous FDA evaluations and earned approval, most haven’t. And because the FDA only regulates algorithms that are considered medical devices, Parikh said that most LLMs used for administrative tasks and efficiency don’t fall under the regulatory agency’s purview.

But these algorithms still have access to patient information and can directly influence patient and doctor decisions. Third-party regulatory agencies are expected to emerge, but it remains unclear which bodies will take on that role. Before developers can build a safer and more efficient LLM for healthcare, they'll need better guidelines and guardrails. "Unless we figure out evaluation, how would we know whether the healthcare-appropriate large language models are better or worse?" Shah asked.

A version of this article appeared on Medscape.com.
