The largest US physician organization wrestled with the professional risks and rewards of artificial intelligence (AI) at its annual meeting, delaying action even as it adopted new policies on prior authorization and other concerns for clinicians and patients.
Physicians and medical students at the annual meeting of the American Medical Association (AMA) House of Delegates in Chicago intensely debated a report and two key resolutions on AI but could not reach consensus, pushing off decision-making until a future meeting in November.
One resolution would establish “augmented intelligence” as the preferred term for AI, reflecting the desired role of these tools in supporting — not making — physicians’ decisions. The other resolution focused on insurers’ use of AI in determining medical necessity.
(See specific policies adopted at the meeting, held June 8-12, below.)
A comprehensive AMA trustees’ report on AI considered additional issues including requirements for disclosing AI use, liability for harms due to flawed application of AI, data privacy, and cybersecurity.
The AMA intends to “continue to methodically assess these issues and make informed recommendations in proposing new policy,” said Bobby Mukkamala, MD, an otolaryngologist from Flint, Michigan, who became the AMA’s president-elect.
AMA members at the meeting largely applauded the aim of these AI proposals, but some objected to parts of the trustees’ report.
They raised questions about what, exactly, constitutes an AI-powered service and whether all AI tools need the kind of guardrails the AMA may seek. There also were concerns about calls to make AI use more transparent.
While transparency may be an admirable goal, it could prove too hard to achieve given that AI-powered tools and products are already woven into medical practice in ways that physicians may not know or understand, said Christopher Libby, MD, MPH, a clinical informaticist and emergency physician at Cedars-Sinai Medical Center in Los Angeles.
“It’s hard for the practicing clinician to know how every piece of technology works in order to describe it to the patient,” Dr. Libby said at the meeting. “How many people here can identify when algorithms are used in their EHR today?”
He suggested asking for more transparency from the companies that make and sell AI-powered software and tools to insurers and healthcare systems.
Steven H. Kroft, MD, the editor of the American Journal of Clinical Pathology, raised concerns about the unintended harm that unchecked use of AI may pose to scientific research.
He asked the AMA to address “a significant omission in an otherwise comprehensive report” — the need to protect the integrity of study results that can direct patient care.
“While sham science is not a new issue, large language models make it far easier for authors to generate fake papers and far harder for editors, reviewers, and publishers to identify them,” Dr. Kroft said. “This is a rapidly growing phenomenon that is threatening the integrity of the literature. These papers become embedded in the evidence bases that drive clinical decision-making.”
The AMA has been working with specialty societies and outside AI experts to refine an effective set of recommendations. The new policies, once finalized, are intended to build on steps the AMA has already taken, including its release last year of principles for AI development, deployment, and use.