SAN DIEGO — Just a day before the annual meeting of the American Academy of Dermatology (AAD) began, a study was published online in JAMA Dermatology, cautioning that most downloadable mobile apps driven by artificial intelligence (AI) for use in monitoring dermatologic conditions lack validation.

Among the problems identified in the 41 apps evaluated, most offered no supporting evidence, no information about whether app performance had been validated, and no information about how user privacy would be managed, reported Shannon Wongvibulsin, MD, PhD, a resident in the dermatology program at the University of California, Los Angeles, and her coauthors.

The findings from this report were also summarized in a poster at the AAD meeting, and the major themes were reiterated in several AAD symposia devoted to AI at the meeting. Veronica Rotemberg, MD, PhD, a dermatologist at Memorial Sloan Kettering Cancer Center, New York City, was one of those who weighed in on the future of AI. Although she was the senior author of the report, she did not address the report or poster directly, but her presentation on the practical aspects of incorporating AI into dermatology practice revisited several of its themes. 

Perhaps the most important of these themes was that the source of AI data matters and that practicing clinicians should be familiar with it.

To date, “there is not much transparency in what data AI models are using,” Dr. Rotemberg said at the meeting. Based on the expectation that dermatologists will be purchasing rather than developing their own AI-based systems, she reiterated more than once that “transparency of data is critical,” even if vendors are often reluctant to reveal how their proprietary systems have been developed.

Few Dermatology Apps Are Vetted for Accuracy

In the poster and in the more detailed JAMA Dermatology paper, Dr. Wongvibulsin and her coinvestigators evaluated direct-to-consumer downloadable apps that claim to help with the assessment and management of skin conditions. Very few provided any supporting evidence of accuracy or even information about how they functioned.

The 41 apps were drawn from more than 300 candidates; the rest were excluded for reasons such as not employing AI, not being available in English, or not addressing the clinical management of dermatologic diseases. Dr. Wongvibulsin pointed out that none of the 41 apps had received regulatory approval, yet only two provided a disclaimer to that effect.

In all, just 5 of the 41 apps provided supporting evidence from a peer-reviewed journal, and fewer than 40% were created with any input from a dermatologist, Dr. Wongvibulsin reported. As a result, the utility and accuracy of these apps were, for the most part, difficult to judge.

“At a minimum, app developers should provide details on what AI algorithms are used, what data sets were used for training, testing, and validation, whether there was any clinician input, whether there are any supporting publications, how user-submitted images are used, and if there are any measures used to ensure data privacy,” Dr. Wongvibulsin wrote in the poster.
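
Those minimum disclosures map naturally onto a structured "model card" that a developer could publish alongside an app. The sketch below is purely illustrative; the field names and placeholder values are hypothetical, not drawn from any specific app or from the study itself.

```python
from dataclasses import dataclass

@dataclass
class AppModelCard:
    """Hypothetical minimum-transparency record for an AI dermatology app."""
    algorithm: str                      # e.g., "convolutional neural network"
    training_datasets: list[str]        # data sources used for training
    validation_datasets: list[str]      # data sources used for testing/validation
    clinician_input: bool               # was a dermatologist involved in development?
    supporting_publications: list[str]  # peer-reviewed references, if any
    image_use_policy: str               # how user-submitted images are used
    privacy_measures: list[str]         # e.g., encryption, data retention limits

# Placeholder values reflecting what the study found most apps disclose: very little.
card = AppModelCard(
    algorithm="image classifier (architecture not disclosed)",
    training_datasets=["proprietary dataset (details not disclosed)"],
    validation_datasets=[],
    clinician_input=False,
    supporting_publications=[],
    image_use_policy="not stated",
    privacy_measures=[],
)
```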

For AI-based apps or systems designed for use by dermatologists, Dr. Rotemberg made similar assertions in her overview of what clinicians should be considering for proprietary AI systems, whether to help with diagnosis or improve office efficiency.

Only One Dermatology App Cleared by the FDA

Currently, the only FDA-cleared app for dermatologic use is the DermaSensor, an AI-driven device. It was cleared in January 2024 for the evaluation of skin lesions “suggestive” of melanoma, basal cell carcinoma, and/or squamous cell carcinoma in patients aged ≥ 40 years “to assist health care providers in determining whether to refer a patient to a dermatologist,” according to an FDA announcement.

The device uses elastic scattering spectroscopy to analyze light reflected off the skin to detect malignancy; the manufacturer’s promotional material claims 96% sensitivity and 97% specificity.
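
For context (and not as a comment on DermaSensor specifically), sensitivity and specificity alone do not tell a clinician how often a positive result is correct; that also depends on how common malignancy is among the lesions tested. A quick illustrative calculation, applying Bayes' rule to the claimed figures with a hypothetical 5% prevalence:

```python
# Illustrative only: positive predictive value (PPV) from sensitivity,
# specificity, and an assumed prevalence. The 5% prevalence is hypothetical.
sensitivity = 0.96
specificity = 0.97
prevalence = 0.05

true_positives = sensitivity * prevalence                 # diseased and flagged
false_positives = (1 - specificity) * (1 - prevalence)    # healthy but flagged

ppv = true_positives / (true_positives + false_positives)
print(f"PPV at 5% prevalence: {ppv:.1%}")  # ~62.7%
```

Even with strong claimed figures, at low prevalence a meaningful share of positive results would be false positives, which is one reason independent validation in a realistic population matters.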

While Dr. Rotemberg did not comment on these claims, she cautioned that AI models differ with regard to how they were trained and the heterogeneity of the training dataset, defined by the types of patients, types of skin, and types of AI learning processes. All of these variables bear on whether an AI model will perform in a given clinical setting at the level it performed during development.

“The most accurate models employ narrow datasets, but these do not necessarily mimic what we see in practice,” she said.

In addition, even when an AI-based system is working for a given task, it must be monitored over time. Dr. Rotemberg warned about the potential for “data drift,” which describes the slow evolution in how diseases present, their prevalence by age, or other factors that might affect AI performance. She explained that repeated validation is needed to ensure that the AI-based models remain as accurate over time as they were when first used.
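
One simple way to operationalize that repeated validation is to track performance on a rolling window of recently confirmed cases and flag when it drops below the level measured at deployment. The sketch below is a minimal illustration; the window size and tolerance are hypothetical, not recommendations from the presentation.

```python
from collections import deque

# Minimal drift monitor: compare rolling accuracy on recent confirmed cases
# against the accuracy measured at deployment. Thresholds are hypothetical.
class DriftMonitor:
    def __init__(self, baseline_accuracy: float, window: int = 200,
                 tolerance: float = 0.05):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.results = deque(maxlen=window)  # 1 = correct, 0 = incorrect

    def record(self, prediction, ground_truth) -> None:
        self.results.append(1 if prediction == ground_truth else 0)

    def drifting(self) -> bool:
        if len(self.results) < self.results.maxlen:
            return False  # not enough recent cases to judge
        rolling_accuracy = sum(self.results) / len(self.results)
        return rolling_accuracy < self.baseline - self.tolerance

monitor = DriftMonitor(baseline_accuracy=0.90)
```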

Many of these concepts were explored in a consensus statement from the International Skin Imaging Collaboration AI Working Group, published in JAMA Dermatology in December 2021. The statement, of which Dr. Rotemberg was a coauthor, provided recommendations for the principles of AI algorithm development specific to dermatologic considerations.

At the AAD symposium, Dr. Rotemberg asked the audience for suggestions about the needs they hoped AI might address for in-office care or efficiency. Their responses included generating prior authorizations for prescriptions, triaging email for importance, and helping to improve efficiency for common front desk tasks. She liked all of these suggestions, but she warned that as powerful as it can be, AI is not likely to provide technology that will fit seamlessly into workflows without adjustment.

“Our current systems do not allow human integration of AI models,” Dr. Rotemberg said. Rather than counting on AI to adapt to current practices, she cautioned that “we may have to redesign our entire structure to actually be able to accommodate AI-based” systems. The risk for users is that tasks may become more challenging before they become easier.


AI Should Not Be a Black Box

AI is promising, but it is not magic, according to other investigators, including Tofunmi A. Omiye, PhD, a postdoctoral scholar in dermatology at Stanford University, California. First author of a recent review of AI in dermatology published in Frontiers in Medicine, Dr. Omiye agreed that clinicians who want to employ AI should be able to understand basic principles if they want the technology to perform as expected.

“I totally agree that physicians should at least have a basic understanding of the data sources for training AI models as we have found that to be important to the performance of these models in the clinical setting,” he told this news organization.

“Beyond understanding the data sources, I believe physicians can also try to have a comprehensive understanding of what AI means, its training process, and evaluation as this will help them to evaluate its utility in their practice,” he added. He also reinforced the relevance of data drift.

“Concepts like distribution shift — where models perform less well over time due to changes in the patient population — are also important to keep in mind,” Dr. Omiye said.

Dr. Wongvibulsin, Dr. Rotemberg, and Dr. Omiye reported no potential financial conflicts of interest relevant to this topic. 

A version of this article appeared on Medscape.com.


FROM AAD 2024