
Understanding limitations of technology is key

 

Artificial intelligence (AI) and machine learning (ML) are promoted as the solution to many health care problems, but the field risks becoming technology led, with only secondary consideration given to the safe clinical application of that technology, says Robert Challen, PhD.

Dr. Challen, of the University of Exeter (England), is the lead author of a recent paper that examines the short-, medium-, and long-term issues with medical applications of AI. “In the short term, AI systems will effectively function like laboratory screening tests, identifying patients who are at higher risk than others of disease, or who could benefit more from a particular treatment,” Dr. Challen said. “We usually accept that laboratory tests are useful to help make a diagnosis; however, clinicians are aware that they might not always be accurate and interpret their output in the clinical context. AI systems are no different in that they will be a useful tool so long as they are designed with safety in mind and used with a pragmatic attitude to their interpretation.”

The paper also suggests a set of short- and medium-term clinical safety issues that need addressing when bringing these systems from laboratory to bedside.

In the longer term, as more continuously learning and autonomous systems are developed, the safety risks will need to be continuously reevaluated, he added. “Any new technology comes with limitations, and understanding those limitations is key to safe use of that technology. In the same way a new screening test has limitations on its sensitivity and specificity that define how it can be used, AI and ML systems have limitations on accuracy and which patients they can be used on,” Dr. Challen said. If hospitalists understand these limitations, they can participate more effectively in the development of these systems.
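
To make the screening-test analogy concrete, here is a minimal Python sketch of how a model's fixed sensitivity and specificity translate into very different positive predictive values depending on the population it is applied to. The numbers are hypothetical illustrations, not figures from Dr. Challen's paper.

# Illustrative only: why an AI risk model's output, like a lab test's,
# must be interpreted in clinical context. Hypothetical numbers.
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Bayes' rule: P(disease | positive result)."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# The same 90%-sensitive, 90%-specific model behaves very differently
# depending on how common the condition is among the patients screened.
for prevalence in (0.01, 0.10, 0.50):
    ppv = positive_predictive_value(0.90, 0.90, prevalence)
    print(f"prevalence {prevalence:.0%}: PPV = {ppv:.0%}")
# prevalence 1%: PPV = 8% (most positive flags are false alarms)
# prevalence 10%: PPV = 50%
# prevalence 50%: PPV = 90%

In other words, a model that performs well in a high-risk ward may produce mostly false alarms in a general screening population, which is why knowing which patients a system can safely be used on matters as much as its headline accuracy.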

Dr. Challen recommends that hospitalists support the development of AI tools by participating in studies that assess AI applications in the clinical environment. “Try to make sure that where AI research is taking place, there is strong clinical involvement.”

Reference

1. Challen R et al. Artificial intelligence, bias and clinical safety. BMJ Qual Saf. 2019 Jan 12. doi: 10.1136/bmjqs-2018-008370.
