PHOENIX – Artificial intelligence (AI) is here to stay, and it presents opportunities for increased productivity and automation of some tasks. However, it is prone to error and ‘hallucinations’ despite an authoritative tone, so its conclusions must be verified. Those were some of the messages from a talk by John Morren, MD, an associate professor of neurology at Case Western Reserve University, Cleveland, who spoke about AI at the 2023 annual meeting of the American Association for Neuromuscular and Electrodiagnostic Medicine (AANEM).
He encouraged attendees to get involved in the conversation of AI, because it is here to stay and will have a big impact on health care. “If we’re not around the table making decisions, decisions will be made for us in our absence and won’t be in our favor,” said Dr. Morren.
He started his talk by asking whether anyone in the room had used AI. After about half raised their hands, he countered that nearly everyone likely had. Voice assistants like Siri and Alexa, social media with curated feeds, online shopping tools that provide product suggestions, and content recommendations from streaming services like Netflix all rely on AI technology.
Within medicine, AI is already playing a role in various fields, including medical imaging, disease diagnosis, drug discovery and development, predictive analytics, personalized medicine, telemedicine, and health care management.
It also has potential to be used on the job. For example, ChatGPT can generate and refine text toward a specific length, format, style, and level of detail. Alternatives include Bing AI from Microsoft, Bard AI from Google, Writesonic, Copy.ai, SpinBot, HIX.AI, and Chatsonic.
Specific to medicine, Consensus is a search engine that uses AI to search for, summarize, and synthesize studies from peer-reviewed literature.
Trust, but verify
Dr. Morren presented some specific use cases, including patient education and responses to patient inquiries, as well as generating letters to insurance companies appealing denial of coverage claims. He also showed an example where he asked Bing AI to explain to a patient, at a sixth- to seventh-grade reading level, the red-flag symptoms of myasthenic crisis.
AI can also generate summaries of clinical evidence from previous studies. Asked by this reporter how to trust the accuracy of the summaries if the user hasn’t thoroughly read the papers, he acknowledged the imperfection of AI. “I would say that if you’re going to make a decision that you would not have made normally based on the summary that it’s giving, if you can find the fact that you’re anchoring the decision on, go into the article yourself and make sure that it’s well vetted. The AI is just good to tap you on your shoulder and say, ‘hey, just consider this.’ That’s all it is. You should always trust, but verify. If the AI is forcing you to say something new that you would not say, maybe don’t do it – or at least research it to know that it’s the truth and then you elevate yourself and get yourself to the next level.”
Limitations
The need to verify can create its own burden, according to one attendee. “I often find I end up spending more time verifying [what ChatGPT has provided]. This seems to take more time than a traditional way of going to PubMed or UpToDate or any of the other human generated consensus way,” he said.
Dr. Morren replied that he wouldn’t recommend using ChatGPT to query the medical literature. Instead, he recommended Consensus, which searches only the peer-reviewed medical literature.
Another key limitation is that most AI programs are date limited: ChatGPT, for example, doesn’t include information after September 2021, though this may change with paid subscriptions. He also starkly warned the audience never to enter sensitive information, including patient identifiers.
There are also legal and ethical considerations in using AI. Dr. Morren warned against overreliance on AI, as this could undermine compassion and lead to erosion of trust, which makes it important to disclose any use of AI-generated content.
Another attendee raised concerns that AI may be generating research content, including slides for presentations, abstracts, titles, or article text. Dr. Morren said that some organizations, such as the International Committee of Medical Journal Editors, have incorporated AI in their recommendations, stating that authors should disclose any contributions of AI to their publications. However, there is little that can be done to identify AI-generated content, leaving it up to the honor code.
Asked to make predictions about how AI will evolve in the clinic over the next 2-3 years, Dr. Morren suggested that it will likely be embedded in electronic medical records. He anticipated that it will save physicians time so that they can spend more time interacting directly with patients. He quoted Eric Topol, MD, professor of medicine at Scripps Research Translational Institute, La Jolla, Calif., as saying that AI could save 20% of a physician’s time, which could be spent with patients. Dr. Morren saw it differently. “I know where that 20% of time liberated is going to go. I’m going to see 20% more patients. I’m a realist,” he said, to audience laughter.
He also predicted that AI will be found in wearables and devices, allowing health care to expand into the patient’s home in real time. “A lot of what we’re wearing is going to be an extension of the doctor’s office,” he said.
For those hoping for more guidance, Dr. Morren noted that he is the chairman of the professional practice committee of AANEM, and the group will be putting out a position statement within the next couple of months. “It will be a little bit of a blueprint for the path going forward. There are specific things that need to be done. In research, for example, you have to ensure that datasets are diverse enough. To do that we need to have inter-institutional collaboration. We have to ensure patient privacy. Consent for this needs to be a little more explicit because this is a novel area. Those are things that need to be stipulated and ratified through a task force.”
Dr. Morren has no relevant financial disclosures.