Conference Coverage

AI model predicts ovarian cancer responses

FROM SGO 2022

An artificial intelligence (AI) model successfully predicted which patients with high-grade serous ovarian cancer would have excellent responses to treatment, based on still-frame images from pretreatment laparoscopic surgical videos. The model had an overall accuracy of 93%, according to the pilot study’s first author, Deanna Glassman, MD, an oncology fellow at the University of Texas MD Anderson Cancer Center, Houston.

Dr. Glassman described her research in a presentation given at the annual meeting of the Society of Gynecologic Oncology.

While the AI model correctly identified all patients with excellent responses, it misclassified about a third of poor-response patients as excellent responders. Dr. Glassman speculated that the smaller number of images in the poor-response category may explain the misclassification.

The researchers took 435 representative still-frame images from the pretreatment laparoscopic surgical videos of 113 patients with pathologically proven high-grade serous ovarian cancer. The images came from four anatomical locations (diaphragm, omentum, peritoneum, and pelvis). Using deep learning with neural networks, the team trained the model to extract morphological disease patterns and correlate them with one of two outcomes: excellent response, defined as progression-free survival (PFS) of 12 months or more, or poor response, defined as PFS of 6 months or less. Seventy percent of the images were used to train the model, 10% for validation, and 20% for testing. In this retrospective study, after 32 gray-zone patients were excluded, 75 patients (66%) had excellent responses to therapy and 6 (5%) had poor responses.
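The presentation did not describe the network architecture or training code, but the reported setup (a few hundred labeled still frames, a 70/10/20 train/validation/test split, and a two-class outcome) maps onto a standard image-classification pipeline. The sketch below illustrates that general approach and is not the authors’ model: the ResNet-18 backbone, hyperparameters, and randomly generated stand-in tensors are all assumptions for illustration.

```python
# A minimal sketch (not the authors' code) of a 70/10/20 image-classification
# pipeline like the one described in the study. The ResNet-18 backbone,
# hyperparameters, and random stand-in tensors are assumptions.
import torch
from torch import nn
from torch.utils.data import TensorDataset, DataLoader, random_split
from torchvision import models

# Stand-in data: 435 "still frames" as random tensors with binary labels
# (1 = excellent response, 0 = poor response). Real inputs would be
# laparoscopic frames from the diaphragm, omentum, peritoneum, and pelvis.
images = torch.rand(435, 3, 224, 224)
labels = torch.randint(0, 2, (435,))
dataset = TensorDataset(images, labels)

# 70% train / 10% validation / 20% test, as reported in the study.
n = len(dataset)
n_train, n_val = int(0.7 * n), int(0.1 * n)
train_set, val_set, test_set = random_split(
    dataset, [n_train, n_val, n - n_train - n_val]
)
# val_set would normally drive early stopping and hyperparameter choices;
# it is left unused here for brevity.

# CNN backbone with a two-class head (excellent vs. poor response).
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for x, y in DataLoader(train_set, batch_size=16, shuffle=True):  # one epoch
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()

# Overall accuracy on the held-out 20% test split.
model.eval()
correct = 0
with torch.no_grad():
    for x, y in DataLoader(test_set, batch_size=16):
        correct += (model(x).argmax(dim=1) == y).sum().item()
print(f"test accuracy: {correct / len(test_set):.2%}")
```

With 435 frames, a 70/10/20 split works out to roughly 304 training, 43 validation, and 88 test frames (the remainder here is assigned to the test set). A class-weighted loss or oversampling of the poor-response frames would be one conventional way to address the imbalance Dr. Glassman noted.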

The PFS was 19 months in the excellent-response group and 3 months in the poor-response group.

Clinicians have often observed differences in gross morphology within the single histologic diagnosis of high-grade serous ovarian cancer. The aim of the research was to determine whether AI could detect these distinct morphological patterns in still-frame images taken at the time of laparoscopy and correlate them with eventual clinical outcomes. Dr. Glassman and colleagues are currently validating the model in a much larger cohort and will then look into clinical testing.

“The big-picture goal,” Dr. Glassman said in an interview, “would be to utilize the model to predict which patients would do well with traditional standard of care treatments and those who wouldn’t do well so that we can personalize the treatment plan for those patients with alternative agents and therapies.”

Once validated, the model could also be used to identify patterns of disease in other gynecologic cancers or to distinguish viable from necrotic malignant tissue.

The study’s predominant limitation was its small sample size, which is being addressed in a larger ongoing study.

Funding was provided by a T32 grant, MD Anderson Cancer Center Support Grant, MD Anderson Ovarian Cancer Moon Shot, SPORE in Ovarian Cancer, the American Cancer Society, and the Ovarian Cancer Research Alliance. Dr. Glassman declared no relevant financial relationships.
