The Potential for Artificial Intelligence Tools in Residency Recruitment
IN PARTNERSHIP WITH THE ASSOCIATION OF PROFESSORS OF DERMATOLOGY RESIDENCY PROGRAM DIRECTORS SECTION

According to Electronic Residency Application Service (ERAS) statistics, there were more than 1400 dermatology applicants in 2022, with an average of almost 560 applications received per program.1,2 With the goal of expanding the diversity of board-certified dermatologists, there is increasing emphasis on the holistic review of applications, forgoing filtering by discrete metrics such as Alpha Omega Alpha (AOA) Honor Medical Society membership and US Medical Licensing Examination (USMLE) scores.3 According to the Association of American Medical Colleges, holistic review focuses on an individual applicant’s experiences and unique attributes in addition to their academic achievements.4 Recent strategies to enhance the residency recruitment process have included the introduction of standardized letters of recommendation, preference signaling, and supplemental applications.5,6

Because it has become increasingly important to include applicant factors and achievements that extend beyond academics, the number of data points required for holistic review has expanded. If each application required 20 minutes to review, a complete holistic review of 500 applications would take nearly 167 hours. Tools that can facilitate holistic review of candidates and select applicants whose interests and career goals align with individual residency programs have the potential to optimize review. Artificial intelligence (AI) may aid in this process. This column highlights some of the published research on novel AI strategies that have the potential to impact dermatology residency recruitment.
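The arithmetic behind that workload estimate is simple to reproduce. The sketch below assumes the 20-minutes-per-application figure stated above; the function name is illustrative only:

```python
def review_hours(n_applications: int, minutes_per_app: float = 20) -> float:
    """Estimate total reviewer hours for a full holistic review,
    assuming a fixed number of minutes spent per application."""
    return n_applications * minutes_per_app / 60

# 500 applications at 20 minutes each is about 167 reviewer hours
hours = review_hours(500)
```

Even modest per-application time savings compound quickly at this scale, which is the practical appeal of AI-assisted screening.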

Machine Learning to Screen Applicants

Artificial intelligence involves a machine-based system that can make decisions, predictions, and recommendations when provided with a set of human-defined objectives.7 Autonomous systems, machine learning (ML), and generative AI are examples of AI models.8 Machine learning has been explored to shorten and streamline the application review process and to decrease bias. Because ML is a model in which the computer learns patterns from large amounts of input data,9 models developed on prior cycles could be applied in future ones. Some studies have identified applicants who traditionally would not have made it to the next stage of consideration based primarily on academic metrics.10,11 Burk-Rafel et al10 developed and validated an ML-based decision support tool for residency program directors to use for interview invitation decisions. The tool utilized 61 variables from ERAS data from more than 8000 applications in 3 prior application cycles at a single internal medicine residency program. An interview invitation was designated as the target outcome, and the model output a probability score for an interview invitation. The authors were able to tune the model to 91% sensitivity and 85% specificity; for a pool of 2000 applicants and an invite rate of 15%, 1475 applicants would be screened out with a negative predictive value of 98%, and performance was maintained even with removal of USMLE Step 1 examination scores. The ML model was prospectively validated during an ongoing resident selection cycle; compared with human review, the AI model found an additional 20 applicants to invite for interviews. The authors concluded that this tool could augment the human review process and reveal applicants who may have otherwise been overlooked.10
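The screening logic described here can be sketched in simplified form: a model emits an invitation probability per applicant, a cutoff is tuned to a target sensitivity, and the negative predictive value among screened-out applicants is checked. The toy below is not the Burk-Rafel et al model; the data, function names, and threshold strategy are invented for illustration:

```python
def tune_threshold(scores, invited, target_sensitivity=0.91):
    """Pick the lowest score among the top fraction of truly invited
    applicants, so that the cutoff retains the target sensitivity."""
    invite_scores = sorted((s for s, y in zip(scores, invited) if y), reverse=True)
    k = max(1, int(round(target_sensitivity * len(invite_scores))))
    return invite_scores[k - 1]

def negative_predictive_value(scores, invited, cutoff):
    """Among applicants screened out (score below cutoff), the share
    who truly were not invited."""
    screened_out = [(s, y) for s, y in zip(scores, invited) if s < cutoff]
    if not screened_out:
        return 1.0
    return sum(1 for _, y in screened_out if not y) / len(screened_out)

# Hypothetical model scores and historical invite labels
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
invited = [1, 1, 1, 0, 1, 0, 0, 0]
cutoff = tune_threshold(scores, invited)
npv = negative_predictive_value(scores, invited, cutoff)
```

A high negative predictive value is what makes this kind of tool usable as a first-pass screen: few applicants who would have been invited are discarded.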

Rees and Ryder11 utilized another ML screening approach, using 72 unique variables from ERAS data for more than 5000 applicants, with the target outcomes of ranked applicants and of matriculated compared with ranked applicants. Their model was able to identify ranked candidates from the overall applicant pool with high accuracy; identification of ranked applicants who matriculated at the program was more modest but better than random probability.11 Both the Burk-Rafel et al10 and Rees and Ryder11 models excluded some unstructured data components of the residency application, such as personal statements, medical student performance evaluation letters, transcripts, and letters of reference, which some may weigh strongly in the holistic review process. Drum et al12 explored the value of extracting this type of data. They created a program to extract “snippets” of text pertaining to values of successful residents for their internal medicine–pediatrics residency program, values they had previously validated via a modified Delphi method; the snippets then were annotated by expert reviewers. Natural language processing was used to train an ML algorithm (MLA) to classify snippets into 11 value categories. Four values had more than 66% agreement with human annotation: academic strength; leadership; communication; and justice, equity, diversity, and inclusion. Although this MLA has not reached sufficiently high agreement for all the predetermined success values, the authors hope to generate a model that could produce a quantitative score to use as an initial screening tool to select applicants for interview.12 This type of analysis also could be incorporated into other MLAs for further refinement of the mentoring and application process.
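The snippet-classification idea can be illustrated with a deliberately simple toy. Real systems such as the one described by Drum et al use trained natural language processing models; the keyword matcher below, whose categories and keywords are invented, only shows the input/output shape of such a classifier:

```python
# Hypothetical value categories and keyword sets; a trained NLP model
# would replace this lookup table in practice.
VALUE_KEYWORDS = {
    "academic strength": {"honors", "score", "publication", "research"},
    "leadership": {"led", "president", "founded", "chief"},
    "communication": {"presented", "taught", "counseled"},
}

def classify_snippet(snippet: str) -> str:
    """Assign a snippet to the value category with the most keyword
    overlap; fall back to 'uncategorized' when nothing matches."""
    words = set(snippet.lower().split())
    best, best_hits = "uncategorized", 0
    for category, keywords in VALUE_KEYWORDS.items():
        hits = len(words & keywords)
        if hits > best_hits:
            best, best_hits = category, hits
    return best
```

The hard part, as the reported agreement rates suggest, is that many snippets express several values at once or none cleanly, which is why expert annotation remains the benchmark.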

Knapke et al13 evaluated the use of a natural language modeling platform to look for semantic patterns in medical school applications that could predict which students would be more likely to pursue family medicine residency, thus beginning the recruitment process even before residency application. This strategy could be particularly valuable for specialties for which there may be greater need in the workforce.

AI for Administrative Purposes

Artificial intelligence also has been used for nonapplication aspects of the residency recruitment process, such as interview scheduling. In the absence of coordinated interview release dates (implemented in dermatology starting in the 2020-2021 application cycle), a deluge of scheduling responses floods in as soon as interview invitations are sent out, which can produce anxiety for both applicants and residency program staff as the schedule is sorted out and can create delays at both ends. Stephens et al14 utilized a computerized scheduling program for pediatric surgery fellowship applicants. In 2016 it was used to schedule 26 interviews and reduced the average time to schedule an interview from 14.4 hours to 1.7 hours; it also reduced the number of email exchanges needed to finalize scheduling.14

Another aspect of residency recruitment that is amenable to AI is information gathering. Many would-be applicants turn to the internet and social media to learn about residency programs—their unique qualities, assets, and potential alignment of career goals.15 This exchange often is unidirectional, as the applicant clicks through the website searching for information. Yi et al16 explored the use of a chatbot, which mimics human conversation and exchange, on their institution’s pain fellowship website. Fellowship applicants could create specific prompts, such as “Show me faculty that trained at <applicant’s home program>,” and the chatbot would reply with the answer. The researchers sent a survey to all 258 applicants to the pain fellowship program; 48 completed it. Of these respondents, 73% (35/48) utilized the chatbot, and 83% (40/48) stated that they had found the information that was requested. The respondents overall found the chatbot to be a useful and positive experience.16

Specific Tools to Consider

Several publicly available tools that rely on AI can be used by programs and applicants.

In collaboration with ERAS and the Association of American Medical Colleges, Cortex powered by Thalamus (SJ MedConnect Inc)(https://thalamusgme.com/cortex-application-screening/) offers technology-assisted holistic review of residency and fellowship applications by utilizing natural language processing and optical character recognition to aggregate data from ERAS.

Tools also are being leveraged by applicants to help them find residency programs that fit their criteria, prepare for interviews, and complete portions of the application. Match A Resident (https://www.matcharesident.com/) is a resource for the international medical graduate community. As part of the service, the “Learn More with MARai” feature uses AI to generate information on residency programs to increase applicants’ confidence going into the interview process.17 Big Interview Medical (https://www.biginterviewmedical.com/ai-feedback), a paid interview preparation system developed by interview experts, utilizes AI to provide feedback to applicants practicing for the interview process by measuring natural eye contact, language used, and pace of speech. A “Power Word” score incorporates aspects such as the use of filler words (“umm,” “uhh”), and a Pace of Speech tool provides feedback on speaking rate, on the premise that an ideal pace reduces the impression that the applicant is nervous. Johnstone et al18 used ChatGPT (https://chat.openai.com/auth/login) to generate 2 personal statements for anesthesia residency applicants; based on survey responses from 31 program directors, 22 rated the statements as good or excellent.18

Ethical Concerns and Limitations of AI

The potential use of AI tools by residency applicants inevitably brings forth consideration of biases, ethics, and current limitations. These tools are highly dependent on the quality and quantity of data used for training and validation. Information considered valuable in the holistic review of applications includes unstructured data such as personal statements and letters of recommendation; incorporating this information into ML models can be challenging, in contrast to discrete structured data such as grades, test scores, and awards. In addition, MLAs depend on large quantities of data to optimize performance.19 Depending on the size of the applicant pool and the amount of data available, this can present a limitation for smaller programs in developing ML tools for residency recruitment. Studies evaluating the use of AI in the residency application process often are from single institutions, and therefore generalizability is uncertain. The risk for latent bias, whereby a historical or pre-existing stereotype is perpetuated through the system, must be considered, along with development of tools to detect and address it if found. Choosing which data to use to train the model, as well as which outcome of interest to target, also can be challenging. For these interventions to become more resilient, programs need to examine what defines a successful match to their program and incorporate those criteria into their ML studies. The models described in this overview focused on outcomes such as whether an applicant was invited to interview, whether the applicant was ranked, and whether the applicant matriculated to the program.10,11 For supervised ML models that rely on outcomes to develop a prediction, continued research into which outcomes represent resident success (eg, passing board certification examinations, correlation with clinical performance) would be important.
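One concrete way to probe for the latent bias discussed above is to compare a model's screen-out rates across applicant groups. The sketch below, with hypothetical group labels, scores, and cutoff, illustrates a simple parity check; a real audit would use validated fairness metrics and far more data:

```python
def screen_out_rate(scores, cutoff):
    """Fraction of applicants whose score falls below the invite cutoff."""
    return sum(1 for s in scores if s < cutoff) / len(scores)

def parity_gap(groups, cutoff):
    """Largest difference in screen-out rate between any two groups;
    a large gap is a signal to investigate, not proof of bias."""
    rates = [screen_out_rate(scores, cutoff) for scores in groups.values()]
    return max(rates) - min(rates)

# Hypothetical model scores, keyed by applicant group
groups = {
    "group_a": [0.9, 0.2, 0.1, 0.4],
    "group_b": [0.8, 0.7, 0.3, 0.6],
}
gap = parity_gap(groups, cutoff=0.5)
```

Running such a check each cycle would let a program detect drift before a skewed training set hardens into a skewed interview pool.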
There also is the possibility of applicants restructuring their applications to align with the goals of an AI-assisted search and using AI to generate part or all of their application. The use of ChatGPT and other AI tools in the preparation of personal statements and curricula vitae may provide benefits such as improved efficiency and grammar support.20 However, as use becomes more widespread, there is the potential for increased similarity of personal statements, and opinions on the use of such tools as writing aids likely will vary.21,22 Efforts to develop guidance on generative AI use cases are ongoing; one example is the launch of VALID AI (https://validai.health/), a collaboration among health systems, health plans, AI research organizations, and nonprofits.23

Final Thoughts

Artificial intelligence tools may be a promising resource for residency and fellowship programs seeking meaningful ways to select applicants who are good matches for their training environment. Prioritizing the holistic review of applications has been promoted as a method to evaluate applicants beyond their test scores and grades. The use of MLAs may streamline this review process, aid in scheduling interviews, and help discover trends in successful matriculants.

References
  1. Association of American Medical Colleges. ERAS® Statistics. Accessed January 16, 2024. https://www.aamc.org/data-reports/data/eras-statistics-data
  2. National Resident Matching Program, Data Release and Research Committee. Results of the 2022 NRMP Program Director Survey. Accessed January 18, 2024. https://www.nrmp.org/wp-content/uploads/2022/09/PD-Survey-Report-2022_FINALrev.pdf
  3. Isaq NA, Bowers S, Chen ST. Taking a “step” toward diversity in dermatology: de-emphasizing USMLE Step 1 scores in residency applications. Int J Womens Dermatol. 2020;6:209-210. doi:10.1016/j.ijwd.2020.02.008
  4. Association of American Medical Colleges. Holistic review in medical school admissions. Accessed January 16, 2024. https://students-residents.aamc.org/choosing-medical-career/holistic-review-medical-school-admissions
  5. Association of American Medical Colleges. The MyERAS® application and program signaling for 2023-24. Accessed January 16, 2024. https://students-residents.aamc.org/applying-residencies-eras/myeras-application-and-program-signaling-2023-24
  6. Tavarez MM, Baghdassarian A, Bailey J, et al. A call to action for standardizing letters of recommendation. J Grad Med Educ. 2022;14:642-646. doi:10.4300/JGME-D-22-00131.1
  7. US Department of State. Artificial intelligence (AI). Accessed January 16, 2024. https://www.state.gov/artificial-intelligence/
  8. Stanford University Human-Centered Artificial Intelligence. Artificial intelligence definitions. Accessed January 16, 2024. https://hai.stanford.edu/sites/default/files/2023-03/AI-Key-Terms-Glossary-Definition.pdf
  9. Rajkomar A, Dean J, Kohane I. Machine learning in medicine. N Engl J Med. 2019;380:1347-1358. doi:10.1056/NEJMra1814259
  10. Burk-Rafel J, Reinstein I, Feng J, et al. Development and validation of a machine learning-based decision support tool for residency applicant screening and review. Acad Med. 2021;96(11S):S54-S61. doi:10.1097/ACM.0000000000004317
  11. Rees CA, Ryder HF. Machine learning for the prediction of ranked applicants and matriculants to an internal medicine residency program. Teach Learn Med. 2023;35:277-286. doi:10.1080/10401334.2022.2059664
  12. Drum B, Shi J, Peterson B, et al. Using natural language processing and machine learning to identify internal medicine-pediatrics residency values in applications. Acad Med. 2023;98:1278-1282. doi:10.1097/ACM.0000000000005352
  13. Knapke JM, Mount HR, McCabe E, et al. Early identification of family physicians using qualitative admissions data. Fam Med. 2023;55:245-252. doi:10.22454/FamMed.2023.596964
  14. Stephens CQ, Hamilton NA, Thompson AE, et al. Use of computerized interview scheduling program for pediatric surgery match applicants. J Pediatr Surg. 2017;52:1056-1059. doi:10.1016/j.jpedsurg.2017.03.033
  15. Nickles MA, Kulkarni V, Varghese JA, et al. Dermatology residency programs’ websites in the virtual era: a cross-sectional analysis. J Am Acad Dermatol. 2022;86:447-448. doi:10.1016/j.jaad.2021.09.064
  16. Yi PK, Ray ND, Segall N. A novel use of an artificially intelligent chatbot and a live, synchronous virtual question-and-answer session for fellowship recruitment. BMC Med Educ. 2023;23:152. doi:10.1186/s12909-022-03872-z
  17. Introducing “Learn More with MARai”—the key to understanding your target residency programs. Match A Resident website. Published September 23, 2023. Accessed January 16, 2024. https://blog.matcharesident.com/ai-powered-residency-insights/
  18. Johnstone RE, Neely G, Sizemore DC. Artificial intelligence software can generate residency application personal statements that program directors find acceptable and difficult to distinguish from applicant compositions. J Clin Anesth. 2023;89:111185. doi:10.1016/j.jclinane.2023.111185
  19. Khalid N, Qayyum A, Bilal M, et al. Privacy-preserving artificial intelligence in healthcare: techniques and applications. Comput Biol Med. 2023;158:106848. doi:10.1016/j.compbiomed.2023.106848
  20. Birt J. How to optimize your resume for AI scanners (with tips). Indeed website. Updated December 30, 2022. Accessed January 16, 2024. https://www.indeed.com/career-advice/resumes-cover-letters/resume-ai
  21. Patel V, Deleonibus A, Wells MW, et al. Distinguishing authentic voices in the age of ChatGPT: comparing AI-generated and applicant-written personal statements for plastic surgery residency application. Ann Plast Surg. 2023;91:324-325. doi:10.1097/SAP.0000000000003653
  22. Woodfin MW. The personal statement in the age of artificial intelligence. Acad Med. 2023;98:869. doi:10.1097/ACM.0000000000005266
  23. Diaz N. UC Davis Health to lead new gen AI collaborative. Beckers Healthcare website. Published October 10, 2023. Accessed January 16, 2024. https://www.beckershospitalreview.com/digital-health/uc-davis-health-to-lead-new-gen-ai-collaborative.html
Author and Disclosure Information

From the University of Chicago Medicine, Section of Dermatology, Department of Medicine, Chicago, Illinois.

The author reports no conflict of interest.

Correspondence: Arlene M. Ruiz de Luzuriaga, MD, MPH, MBA, University of Chicago Medicine, 5841 S Maryland Ave, MC 5067, Chicago, IL 60637-1447 (aruizde@bsd.uchicago.edu).

Cutis. 2024;113(2):56-59.

Rees and Ryder11 utilized another ML screening approach with the target outcome of ranked and matriculated applicants compared with ranked applicants, based on ERAS data comprising 72 unique variables for more than 5000 applicants. Their model identified ranked candidates from the overall applicant pool with high accuracy; identification of ranked applicants who matriculated at the program was more modest but better than random probability.11

Both the Burk-Rafel et al10 and Rees and Ryder11 models excluded some unstructured components of the residency application, such as personal statements, medical student performance evaluation letters, transcripts, and letters of reference, that some programs weigh strongly in the holistic review process. Drum et al12 explored the value of extracting this type of data. They created a program to extract “snippets” of text pertaining to the values of successful residents in their internal medicine–pediatrics residency program (values previously validated via a modified Delphi method), which then were annotated by expert reviewers. Natural language processing was used to train an ML algorithm (MLA) to classify snippets into 11 value categories. Four values reached more than 66% agreement with human annotation: academic strength; leadership; communication; and justice, equity, diversity, and inclusion. Although the MLA has not yet reached sufficiently high agreement for all of the predetermined success values, the authors hope to generate a model that produces a quantitative score for use as an initial screening tool to select applicants for interview.12 This type of analysis also could be incorporated into other MLAs to further refine the mentoring and application process.
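The screening arithmetic reported by Burk-Rafel et al10 can be checked in a few lines. This is a back-of-the-envelope sketch, not the authors' code; the inputs (pool of 2000, 15% invite rate, 91% sensitivity, 85% specificity) come from the study, and the small difference from the published figure of 1475 screened-out applicants reflects rounding of the reported sensitivity and specificity.

```python
# Back-of-the-envelope check of the screening figures reported by
# Burk-Rafel et al (pool of 2000, 15% invite rate, 91% sensitivity,
# 85% specificity). Illustrative only.
pool, invite_rate = 2000, 0.15
sensitivity, specificity = 0.91, 0.85

invited = pool * invite_rate            # 300 true "invite" applicants
not_invited = pool - invited            # 1700

true_pos = sensitivity * invited        # correctly flagged for review
false_neg = invited - true_pos          # invites the model would miss
true_neg = specificity * not_invited    # correctly screened out
screened_out = true_neg + false_neg     # applicants removed from human review

npv = true_neg / screened_out           # negative predictive value
print(round(screened_out), round(npv, 3))  # 1472 0.982
```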

Knapke et al13 evaluated the use of a natural language modeling platform to look for semantic patterns in medical school applications that could predict which students would be more likely to pursue family medicine residency, thus beginning the recruitment process even before residency application. This strategy could be particularly valuable for specialties for which there may be greater need in the workforce.
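Approaches such as those of Drum et al12 and Knapke et al13 rest on converting free application text into category scores. A deliberately trivial, stdlib-only sketch of that idea follows; the actual studies used trained natural language processing models, not keyword lists, and the category terms below are invented for illustration.

```python
# Illustrative keyword-based scorer in the spirit of classifying
# application "snippets" into value categories. The real studies used
# trained NLP models; these category terms are invented.
from collections import Counter
import re

CATEGORY_TERMS = {
    "leadership": {"led", "founded", "organized", "president", "chair"},
    "communication": {"presented", "taught", "mentored", "counseled"},
}

def score_snippet(snippet: str) -> dict:
    """Count how many terms from each value category appear in a snippet."""
    words = Counter(re.findall(r"[a-z]+", snippet.lower()))
    return {cat: sum(words[t] for t in terms)
            for cat, terms in CATEGORY_TERMS.items()}

scores = score_snippet("She founded and led a free clinic, and mentored "
                       "first-year students.")
print(scores)  # {'leadership': 2, 'communication': 1}
```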

AI for Administrative Purposes

Artificial intelligence also has been used for nonapplication aspects of residency recruitment, such as interview scheduling. In the absence of coordinated interview release dates (implemented in dermatology starting in the 2020-2021 application cycle), a deluge of scheduling responses arrives as soon as interview invitations are sent out, which can produce anxiety for both applicants and residency program staff and create delays at both ends. Stephens et al14 utilized a computerized scheduling program for pediatric surgery fellowship applicants. In 2016, the program was used to schedule 26 interviews and reduced the average time to schedule an interview from 14.4 hours to 1.7 hours; it also reduced the number of email exchanges needed to finalize scheduling.14
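The core of such a scheduler is simple slot assignment with capacity limits. A minimal sketch of that logic follows; the dates, capacities, and applicant names are invented, and the program described by Stephens et al14 was a full scheduling system, not this toy.

```python
# Illustrative first-response slot assignment with per-day capacity,
# the kind of bookkeeping a computerized interview scheduler automates.
# Dates, capacities, and applicants are invented.
from datetime import date

slots = [date(2024, 1, 8), date(2024, 1, 9), date(2024, 1, 10)]
capacity = {s: 2 for s in slots}   # interviews offered per day
assignments = {}                   # applicant -> assigned interview date

def request(applicant, preferences):
    """Assign the first preferred slot with remaining capacity."""
    for slot in preferences:
        if capacity.get(slot, 0) > 0:
            capacity[slot] -= 1
            assignments[applicant] = slot
            return slot
    return None  # no preferred slot available; wait-list the applicant

request("A", [date(2024, 1, 9), date(2024, 1, 8)])
request("B", [date(2024, 1, 9)])
request("C", [date(2024, 1, 9), date(2024, 1, 10)])
print(assignments["C"])  # 2024-01-10 (Jan 9 was filled by A and B)
```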

Another aspect of residency recruitment that is amenable to AI is information gathering. Many prospective applicants turn to the internet and social media to learn about residency programs: their unique qualities, assets, and potential alignment with the applicant’s career goals.15 This exchange often is unidirectional, with the applicant clicking through a website in search of information. Yi et al16 explored the use of a chatbot, which mimics human conversation and exchange, on their institution’s pain fellowship website. Fellowship applicants could enter specific prompts, such as “Show me faculty that trained at <applicant’s home program>,” and the chatbot would reply with the answer. The researchers sent a survey to all 258 applicants to the pain fellowship program; 48 completed it. Of these respondents, more than 70% (35/48) utilized the chatbot, and 84% (40/48) stated that they found the information they requested. Overall, respondents found the chatbot to be useful and a positive experience.16
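At its simplest, an FAQ-style chatbot retrieves the stored answer whose question best matches the applicant's query. The sketch below shows that retrieval step only; the fellowship chatbot described by Yi et al16 was a far more capable conversational tool, and these question-answer pairs are invented.

```python
# Minimal sketch of the retrieval step behind an FAQ-style chatbot:
# pick the stored question sharing the most words with the query.
# The question-answer pairs are invented for illustration.
FAQ = {
    "what is the interview format": "Two 20-minute virtual panels.",
    "which faculty trained at my home program": "See the faculty roster page.",
    "is there a research requirement": "Yes, one scholarly project.",
}

def answer(query: str) -> str:
    """Return the answer whose stored question shares the most words."""
    q_words = set(query.lower().split())
    best = max(FAQ, key=lambda k: len(q_words & set(k.split())))
    return FAQ[best]

print(answer("What is the format of the interview?"))
# Two 20-minute virtual panels.
```

A production chatbot would use trained language models rather than word overlap, but the retrieve-and-respond structure is the same.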

Specific Tools to Consider

There are some tools that are publicly available for programs and applicants to use that rely on AI.

In collaboration with ERAS and the Association of American Medical Colleges, Cortex powered by Thalamus (SJ MedConnect Inc)(https://thalamusgme.com/cortex-application-screening/) offers technology-assisted holistic review of residency and fellowship applications by utilizing natural language processing and optical character recognition to aggregate data from ERAS.

Tools also are being leveraged by applicants to find residency programs that fit their criteria, prepare for interviews, and complete portions of the application. Match A Resident (https://www.matcharesident.com/) is a resource for the international medical graduate community; its “Learn More with MARai” feature uses AI to generate information on residency programs to increase applicants’ confidence going into the interview process.17 Big Interview Medical (https://www.biginterviewmedical.com/ai-feedback), a paid interview preparation system developed by interview experts, utilizes AI to provide feedback to applicants practicing for interviews by measuring natural eye contact, language use, and pace of speech. A “Power Word” score accounts for use of filler words (“umm,” “uhh”), and a Pace of Speech Tool provides feedback on speaking rate, on the premise that an ideal pace reduces the impression that the applicant is nervous. Johnstone et al18 used ChatGPT (https://chat.openai.com/auth/login) to generate 2 personal statements for anesthesia residency applicants; of 31 program directors surveyed, 22 rated the statements as good or excellent.18

Ethical Concerns and Limitations of AI

The potential use of AI tools in residency recruitment inevitably raises consideration of biases, ethics, and current limitations. These tools are highly dependent on the quality and quantity of data used for training and validation. Information considered valuable in the holistic review of applications includes unstructured data such as personal statements and letters of recommendation, and incorporating this information into ML models can be challenging, in contrast to discrete structured data such as grades, test scores, and awards. In addition, MLAs depend on large quantities of data to optimize performance.19 Depending on the size of the applicant pool and the amount of data available, this can limit smaller programs’ ability to develop ML tools for residency recruitment. Studies evaluating the use of AI in the residency application process often come from single institutions, so generalizability is uncertain. The risk for latent bias, whereby a historical or pre-existing stereotype is perpetuated through the system, must be considered, along with development of tools to detect and address such bias if found. Choosing which data to use to train a model, and which outcome to predict, also can be difficult. For these interventions to become more resilient, programs need to examine what defines a successful match to their program and incorporate those criteria into their ML studies. The models described in this overview focused on outcomes such as whether an applicant was invited to interview, was ranked, or matriculated to the program.10,11 For supervised ML models that rely on outcomes to develop a prediction, continued research into which outcomes represent resident success (eg, passing board certification examinations, correlation with clinical performance) will be important.
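One concrete way a program could probe a screening model for the latent bias described above is to compare its selection rates across applicant groups. The sketch below applies the "four-fifths rule" heuristic from US employment-selection guidance; the group labels and counts are invented, and a real fairness audit would require far more than this single check.

```python
# Minimal disparate-impact check on a screening model's invite decisions:
# compare each group's invite rate against the highest group's rate
# (the "four-fifths rule" heuristic). Groups and counts are invented.
invites = {"group_a": (45, 300), "group_b": (30, 300)}  # (invited, applied)

rates = {g: inv / total for g, (inv, total) in invites.items()}
top = max(rates.values())
impact_ratios = {g: r / top for g, r in rates.items()}

for group, ratio in impact_ratios.items():
    # Ratios below 0.8 conventionally trigger further review.
    flag = "review" if ratio < 0.8 else "ok"
    print(group, round(ratio, 2), flag)
# group_a 1.0 ok
# group_b 0.67 review
```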
There also is the possibility of applicants restructuring their applications to align with the goals of an AI-assisted search, or using AI to generate part or all of their application. The use of ChatGPT and other AI tools in preparing personal statements and curricula vitae may provide benefits such as improved efficiency and grammar support.20 However, as use becomes more widespread, there is the potential for increased similarity among personal statements, and opinions on the use of such tools as writing aids likely will vary.21,22 Efforts to develop guidance on generative AI use cases are ongoing; one example is the launch of VALID AI (https://validai.health/), a collaboration among health systems, health plans, AI research organizations, and nonprofits.23

Final Thoughts

Artificial intelligence tools may be a promising resource for residency and fellowship programs seeking to find meaningful ways to select applicants who are good matches for their training environment. Prioritizing the holistic review of applications has been promoted as a method to evaluate the applicant beyond their test scores and grades. The use of MLAs may streamline this review process, aid in scheduling interviews, and help discover trends in successful matriculants.

References
  1. Association of American Medical Colleges. ERAS® Statistics. Accessed January 16, 2024. https://www.aamc.org/data-reports/data/eras-statistics-data
  2. National Resident Matching Program, Data Release and Research Committee. Results of the 2022 NRMP Program Director Survey. Accessed January 18, 2024. https://www.nrmp.org/wp-content/uploads/2022/09/PD-Survey-Report-2022_FINALrev.pdf
  3. Isaq NA, Bowers S, Chen ST. Taking a “step” toward diversity in dermatology: de-emphasizing USMLE Step 1 scores in residency applications. Int J Womens Dermatol. 2020;6:209-210. doi:10.1016/j.ijwd.2020.02.008
  4. Association of American Medical Colleges. Holistic review in medical school admissions. Accessed January 16, 2024. https://students-residents.aamc.org/choosing-medical-career/holistic-review-medical-school-admissions
  5. Association of American Medical Colleges. The MyERAS® application and program signaling for 2023-24. Accessed January 16, 2024. https://students-residents.aamc.org/applying-residencies-eras/myeras-application-and-program-signaling-2023-24
  6. Tavarez MM, Baghdassarian A, Bailey J, et al. A call to action for standardizing letters of recommendation. J Grad Med Educ. 2022;14:642-646. doi:10.4300/JGME-D-22-00131.1
  7. US Department of State. Artificial intelligence (AI). Accessed January 16, 2024. https://www.state.gov/artificial-intelligence/
  8. Stanford University Human-Centered Artificial Intelligence. Artificial intelligence definitions. Accessed January 16, 2024. https://hai.stanford.edu/sites/default/files/2023-03/AI-Key-Terms-Glossary-Definition.pdf
  9. Rajkomar A, Dean J, Kohane I. Machine learning in medicine. N Engl J Med. 2019;380:1347-1358. doi:10.1056/NEJMra1814259
  10. Burk-Rafel J, Reinstein I, Feng J, et al. Development and validation of a machine learning-based decision support tool for residency applicant screening and review. Acad Med. 2021;96(11S):S54-S61. doi:10.1097/ACM.0000000000004317
  11. Rees CA, Ryder HF. Machine learning for the prediction of ranked applicants and matriculants to an internal medicine residency program. Teach Learn Med. 2023;35:277-286. doi:10.1080/10401334.2022.2059664
  12. Drum B, Shi J, Peterson B, et al. Using natural language processing and machine learning to identify internal medicine-pediatrics residency values in applications. Acad Med. 2023;98:1278-1282. doi:10.1097/ACM.0000000000005352
  13. Knapke JM, Mount HR, McCabe E, et al. Early identification of family physicians using qualitative admissions data. Fam Med. 2023;55:245-252. doi:10.22454/FamMed.2023.596964
  14. Stephens CQ, Hamilton NA, Thompson AE, et al. Use of computerized interview scheduling program for pediatric surgery match applicants. J Pediatr Surg. 2017;52:1056-1059. doi:10.1016/j.jpedsurg.2017.03.033
  15. Nickles MA, Kulkarni V, Varghese JA, et al. Dermatology residency programs’ websites in the virtual era: a cross-sectional analysis. J Am Acad Dermatol. 2022;86:447-448. doi:10.1016/j.jaad.2021.09.064
  16. Yi PK, Ray ND, Segall N. A novel use of an artificially intelligent Chatbot and a live, synchronous virtual question-and answer session for fellowship recruitment. BMC Med Educ. 2023;23:152. doi:10.1186/s12909-022-03872-z
  17. Introducing “Learn More with MARai”—the key to understanding your target residency programs. Match A Resident website. Published September 23, 2023. Accessed January 16, 2024. https://blog.matcharesident.com/ai-powered-residency-insights/
  18. Johnstone RE, Neely G, Sizemore DC. Artificial intelligence software can generate residency application personal statements that program directors find acceptable and difficult to distinguish from applicant compositions. J Clin Anesth. 2023;89:111185. doi:10.1016/j.jclinane.2023.111185
  19. Khalid N, Qayyum A, Bilal M, et al. Privacy-preserving artificial intelligence in healthcare: techniques and applications. Comput Biol Med. 2023;158:106848. doi:10.1016/j.compbiomed.2023.106848
  20. Birt J. How to optimize your resume for AI scanners (with tips). Indeed website. Updated December 30, 2022. Accessed January 16, 2024. https://www.indeed.com/career-advice/resumes-cover-letters/resume-ai
  21. Patel V, Deleonibus A, Wells MW, et al. Distinguishing authentic voices in the age of ChatGPT: comparing AI-generated and applicant-written personal statements for plastic surgery residency application. Ann Plast Surg. 2023;91:324-325. doi:10.1097/SAP.0000000000003653
  22. Woodfin MW. The personal statement in the age of artificial intelligence. Acad Med. 2023;98:869. doi:10.1097/ACM.0000000000005266
  23. Diaz N. UC Davis Health to lead new gen AI collaborative. Becker’s Healthcare website. Published October 10, 2023. Accessed January 16, 2024. https://www.beckershospitalreview.com/digital-health/uc-davis-health-to-lead-new-gen-ai-collaborative.html
Issue
Cutis - 113(2)
Page Number
56-59

Practice Points

  • Artificial intelligence solutions may increase the efficiency of the holistic review process and enhance the opportunity to find applicants who may have been overlooked by a traditional review process.
  • Artificial intelligence support also may be utilized by applicants to aid in discovering training programs that fit their interests, practice interview strategies, and refine their written application.