A model that crunches data from routine blood tests can identify most cancer patients who will develop acute kidney injury (AKI) up to a month before it happens, according to a cohort study.
The algorithm spotted nearly 74% of the patients who went on to develop AKI within 30 days, providing a window for intervention and possibly prevention, according to investigators.
These results were reported at the AACR Virtual Special Conference: Artificial Intelligence, Diagnosis, and Imaging (abstract PR-11).
“Cancer patients are a high-risk population for AKI due to the nature of their treatment and illness,” said presenter Lauren A. Scanlon, PhD, a data scientist at The Christie NHS Foundation Trust in Huddersfield, England. “AKI causes a huge disruption in treatment and distress for the patient, so it would be amazing if we could, say, predict the AKI before it occurs and prevent it from even happening.”
U.K. health care providers are already using an algorithm to monitor patients’ creatinine levels, comparing new values against historic ones, Dr. Scanlon explained. When that algorithm detects AKI, it issues an alert that triggers implementation of an AKI care bundle, including measures such as fluid monitoring and medication review, within 24 hours.
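In rough terms, that kind of detection amounts to comparing each new creatinine result against the patient's historic baseline and raising an alert when the rise crosses a threshold. The sketch below is purely illustrative: the function name and the ratio cut-offs are assumptions loosely modelled on KDIGO-style staging, not the exact rules of the U.K. algorithm Dr. Scanlon described.

```python
# Illustrative sketch: compare a new creatinine value against a patient's
# historic baseline and return an AKI alert stage when the ratio crosses a
# threshold. The cut-offs are assumptions for illustration only.
def aki_alert(new_creatinine_umol_l: float, baseline_creatinine_umol_l: float) -> str | None:
    """Return an alert stage ('stage 1'..'stage 3') or None if no alert."""
    ratio = new_creatinine_umol_l / baseline_creatinine_umol_l
    if ratio >= 3.0:
        return "stage 3"
    if ratio >= 2.0:
        return "stage 2"
    if ratio >= 1.5:
        return "stage 1"
    return None

# Example: a rise from a baseline of 80 to 130 umol/L triggers a stage 1 alert,
# which in the workflow above would prompt the 24-hour AKI care bundle.
print(aki_alert(130, 80))  # -> "stage 1"
```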
Taking this concept further, Dr. Scanlon and colleagues developed a random forest model, a type of machine learning algorithm, that incorporates other markers from blood tests routinely obtained for all patients, with the aim of predicting AKI up to 30 days in advance.
“Using routinely collected blood test results will ensure that the model is applicable to all our patients and can be implemented in an automated manner,” Dr. Scanlon noted.
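The presentation did not include code, but a random forest classifier of this kind is typically built along the following lines. The library choice (scikit-learn), the blood-test feature names, the file name, and the hyperparameters below are all assumptions for illustration; the Christie model's actual inputs and configuration may differ.

```python
# Minimal sketch of training a random forest on routine blood-test features
# to predict AKI within 30 days. Features and settings are assumed.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical table: one row per blood test, labelled with whether the
# patient developed AKI within the following 30 days.
df = pd.read_csv("blood_tests.csv")  # placeholder file name
feature_cols = ["creatinine", "urea", "sodium", "potassium", "albumin", "haemoglobin"]
X, y = df[feature_cols], df["aki_within_30_days"]

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

model = RandomForestClassifier(n_estimators=500, class_weight="balanced", random_state=0)
model.fit(X_train, y_train)

# Predicted probability of AKI in the next 30 days for each held-out blood test.
probs = model.predict_proba(X_test)[:, 1]
```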
The investigators developed and trained the model using 597,403 blood test results from 48,865 patients undergoing cancer treatment between January 2017 and May 2020.
The model assigns patients to five categories of risk for AKI in the next 30 days: very low, low, medium, high, and very high.
“We wanted the model to output in this way so that it could be used by clinicians alongside their own insight and knowledge on a case-by-case basis,” Dr. Scanlon explained.
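The cut points separating the five bands were not reported. The sketch below shows one way such a probability-to-category mapping could look, with entirely assumed thresholds.

```python
# Illustrative mapping from a predicted AKI probability to the five risk bands.
# The cut points are assumptions chosen for illustration only.
def risk_category(prob: float) -> str:
    if prob < 0.05:
        return "very low"
    if prob < 0.15:
        return "low"
    if prob < 0.35:
        return "medium"
    if prob < 0.60:
        return "high"
    return "very high"
```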
The investigators then prospectively validated the model and its risk categories in another 9,913 patients who underwent cancer treatment between June and August 2020.
Using a threshold of medium risk or higher, the model correctly predicted AKI in 330 (73.8%) of the 447 patients in the validation cohort who ultimately developed AKI.
“This is pretty amazing and shows that this model really is working and can correctly detect these AKIs up to 30 days before they occur, giving a huge window to put in place preventive strategies,” Dr. Scanlon said.
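For readers who want to check the arithmetic, the 73.8% figure is simply the model's sensitivity at the medium-or-higher threshold, computed from the published counts:

```python
# Sensitivity at the "medium risk or higher" threshold,
# using the counts reported for the validation cohort.
true_positives = 330        # patients flagged medium+ who went on to develop AKI
total_aki_cases = 447       # all validation patients who developed AKI
sensitivity = true_positives / total_aki_cases
print(f"{sensitivity:.1%}")  # -> 73.8%
```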
Among the 154 patients for whom the model predicted an AKI that was never recorded, 9 had only a single follow-up blood test and 17 had none at all, leaving their actual outcomes unclear.
“Given that AKI detection uses blood tests, an AKI in these patients was never confirmed,” Dr. Scanlon noted. “So this could give a potential benefit of the model that we never intended: It could reduce undiagnosed AKI by flagging those who are at risk.”
“Our next steps are to test the model through a technology clinical trial to see if putting intervention strategies in place does prevent these AKIs from taking place,” Dr. Scanlon concluded. “We are also going to move to ongoing monitoring of the model performance.”
Dr. Scanlon disclosed no conflicts of interest. The study did not receive specific funding.
FROM AACR: AI, DIAGNOSIS, AND IMAGING 2021