AI Improves Efficiency in Natural Language Generation
Large language models (LLMs) can produce first drafts, saving time on formatting, image selection, and structure. ChatGPT, developed by OpenAI, is perhaps the most famous LLM; other tools in this category include Google's Bard. LLMs are trained on publicly accessible text, often described as "the whole internet."
In these cases, prompts serve as the input data. The output is a prediction of the first word of the response, then each subsequent word, generated one at a time.
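To make that mechanic concrete, here is a minimal sketch of next-word prediction in Python. It is a toy bigram model assumed purely for illustration (the tiny corpus, the `transitions` table, and the `predict_next` function are all hypothetical), not how any production LLM works internally, but the input/output shape matches the description above: a prompt goes in, and predicted words come out one at a time.

```python
# Minimal sketch of next-word prediction, the core mechanic behind LLMs.
# A real model learns from billions of documents; this hypothetical toy
# bigram model learns word-to-word transition counts from a few sentences.
from collections import Counter, defaultdict

corpus = (
    "the patient was seen today . "
    "the patient was discharged home . "
    "the chart was updated today ."
).split()

# "Training": count how often each word follows each other word.
transitions = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word][next_word] += 1

def predict_next(prompt: str) -> str:
    """Treat the prompt as input data; return the most likely next word."""
    last_word = prompt.split()[-1]
    candidates = transitions.get(last_word)
    if not candidates:
        return "<unknown>"
    return candidates.most_common(1)[0][0]

# The prompt is the input; the output is a prediction of the next word,
# which is appended and fed back in to generate each subsequent word.
prompt = "the patient was"
for _ in range(3):
    prompt += " " + predict_next(prompt)
print(prompt)  # e.g. "the patient was seen today ."
```

A real LLM replaces these simple word-pair counts with a neural network trained on vast amounts of text, but the generation loop is the same: predict a word, append it to the prompt, and repeat.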
Many users appreciate the foundation LLMs provide in facilitating and collating research and summarizing ideas. The generated text serves as a first draft, saving users time on clerical tasks such as formatting, image selection, and structure. Nevertheless, these tools still require human supervision to screen for hallucinations and to add specialized content.
“LLMs are a great starting place to save time but are loaded with errors,” Dr. Kerr said.
Even if the tools could produce error-free content, ethics still come into play when AI-generated content is used without alteration: publishing ML/AI output that has not been modified or supervised by a human is considered plagiarism.
Yet, interestingly, Dr. Kerr found that patients respond more positively to interactions with AI than to those with physicians.
“Patients felt that AI was more sensitive and compassionate because it was longer-winded and humans are short,” he said. He went on to argue that AI might actually prove useful in helping physicians to improve the quality of their patient interactions.
Dr. Kerr left the audience with these key takeaways:
- ML/AI is just one type of clinical tool, with both benefits and limitations. The technology offers the advantages of freeing up clinicians' time for more human-centered tasks, improving clinical decisions in challenging situations, and increasing efficiency.
- However, healthcare systems should understand that ML/AI is not foolproof: the software's knowledge is limited to its training data, and proper use requires human supervision.