SAN DIEGO — Just a day before the annual meeting of the American Academy of Dermatology (AAD) began, a report published in JAMA Dermatology described serious shortcomings in direct-to-consumer apps that use artificial intelligence (AI) for the assessment and management of skin conditions.
Not least of the problems among the 41 apps evaluated, the majority offered no supporting evidence, no information about whether the apps' performance had been validated, and no information about how user privacy would be managed, reported Shannon Wongvibulsin, MD, PhD, a resident in the dermatology program at the University of California, Los Angeles, and her coauthors.
The findings from this report were also summarized in a poster at the AAD meeting, and the major themes were reiterated in several AAD symposia devoted to AI at the meeting. Veronica Rotemberg, MD, PhD, a dermatologist at Memorial Sloan Kettering Cancer Center, New York City, was one of those who weighed in on the future of AI. Although she was the senior author of the report, she did not address the report or poster directly, but her presentation on the practical aspects of incorporating AI into dermatology practice revisited several of its themes.
Of these themes, perhaps the most important were that the source of AI data matters and that practicing clinicians should be familiar with that source.
To date, “there is not much transparency in what data AI models are using,” Dr. Rotemberg said at the meeting. Based on the expectation that dermatologists will be purchasing rather than developing their own AI-based systems, she reiterated more than once that “transparency of data is critical,” even if vendors are often reluctant to reveal how their proprietary systems have been developed.
Few Dermatology Apps Are Vetted for Accuracy
In the poster and in the more detailed JAMA Dermatology paper, Dr. Wongvibulsin and her coinvestigators evaluated direct-to-consumer downloadable apps that claim to help with the assessment and management of skin conditions. Very few provided any supporting evidence of accuracy or even information about how they functioned.
The 41 apps were drawn from more than 300; the rest were excluded because they did not employ AI, were not available in English, or did not address the clinical management of dermatologic diseases. Dr. Wongvibulsin pointed out that none of the apps had been granted regulatory approval, yet only two provided a disclaimer to that effect.
In all, just 5 of the 41 provided supporting evidence from a peer-reviewed journal, and less than 40% were created with any input from a dermatologist, Dr. Wongvibulsin reported. The result is that the utility and accuracy of these apps were, for the most part, difficult to judge.
“At a minimum, app developers should provide details on what AI algorithms are used, what data sets were used for training, testing, and validation, whether there was any clinician input, whether there are any supporting publications, how user-submitted images are used, and if there are any measures used to ensure data privacy,” Dr. Wongvibulsin wrote in the poster.
For AI-based apps or systems designed for use by dermatologists, Dr. Rotemberg made similar points in her overview of what clinicians should consider when evaluating proprietary AI systems, whether intended to aid diagnosis or to improve office efficiency.