Omitting race and ethnicity from colorectal cancer (CRC) recurrence risk prediction models could decrease their accuracy and fairness, particularly for minority groups, potentially leading to inappropriate care advice and contributing to existing health disparities, new research suggests.

“Our study has important implications for developing clinical algorithms that are both accurate and fair,” write first author Sara Khor, MASc, with University of Washington, Seattle, and colleagues.

“Many groups have called for the removal of race in clinical algorithms,” Dr. Khor said in an interview. “We wanted to better understand, using CRC recurrence as a case study, what some of the implications might be if we simply remove race as a predictor in a risk prediction algorithm.”

Their findings suggest that doing so could increase racial bias in model accuracy and make risk estimates less accurate for racial and ethnic minority groups. As a result, patients of minoritized racial and ethnic groups could more often receive inadequate or inappropriate surveillance and follow-up care.

The study was published online in JAMA Network Open.

Lack of data and consensus

There is currently a lack of consensus on whether and how race and ethnicity should be included in clinical risk prediction models used to guide health care decisions, the authors note.

The inclusion of race and ethnicity in clinical risk prediction algorithms has come under increased scrutiny because of concerns over the potential for racial profiling and biased treatment. On the other hand, some argue that excluding race and ethnicity could harm all groups by reducing predictive accuracy and would especially disadvantage minority groups.

It remains unclear whether simply omitting race and ethnicity from algorithms will ultimately improve care decisions for patients of minoritized racial and ethnic groups.

Dr. Khor and colleagues investigated the performance of four risk prediction models for CRC recurrence using data from 4,230 patients with CRC (53% non-Hispanic white; 22% Hispanic; 13% Black or African American; and 12% Asian, Hawaiian, or Pacific Islander).

The four models were as follows (an illustrative code sketch follows the list):

  • A race-neutral model that explicitly excluded race and ethnicity as a predictor.
  • A race-sensitive model that included race and ethnicity.
  • A model with two-way interactions between clinical predictors and race and ethnicity.
  • Separate models stratified by race and ethnicity.
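
To make the comparison concrete, here is a rough sketch of how these four specifications might look as regression formulas in Python. The outcome and predictors (recurrence, age, stage) are hypothetical stand-ins, and the study's actual models may differ (for example, they may be time-to-event rather than logistic models).

    # Hypothetical sketch of the four model variants as logistic regression
    # formulas (statsmodels/patsy). Column names are illustrative stand-ins,
    # not the study's actual variables.
    import pandas as pd
    import statsmodels.formula.api as smf

    def fit_four_models(df: pd.DataFrame) -> dict:
        """Fit all four CRC-recurrence model variants on one cohort."""
        clinical = "age + stage"  # placeholder clinical predictors
        return {
            # 1. Race-neutral: race and ethnicity omitted entirely.
            "race_neutral": smf.logit(
                f"recurrence ~ {clinical}", data=df).fit(disp=0),
            # 2. Race-sensitive: race and ethnicity as an extra main effect.
            "race_sensitive": smf.logit(
                f"recurrence ~ {clinical} + C(race_ethnicity)", data=df).fit(disp=0),
            # 3. Two-way interactions between clinical predictors and race.
            "race_interactions": smf.logit(
                f"recurrence ~ ({clinical}) * C(race_ethnicity)", data=df).fit(disp=0),
            # 4. Separate models, one per race and ethnicity group.
            "race_stratified": {
                group: smf.logit(f"recurrence ~ {clinical}", data=sub).fit(disp=0)
                for group, sub in df.groupby("race_ethnicity")
            },
        }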

They found that the race-neutral model had poorer performance (worse calibration, negative predictive value, and false-negative rates) among racial and ethnic minority subgroups, compared with the non-Hispanic white subgroup. The false-negative rate for Hispanic patients, for example, was 12% vs. 3% for non-Hispanic white patients.
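
The subgroup comparison behind those numbers can be illustrated with a minimal sketch (assumed column names, not the study's code) that tabulates the false-negative rate and negative predictive value for each racial and ethnic group at a fixed risk threshold:

    import numpy as np
    import pandas as pd

    def subgroup_error_rates(df: pd.DataFrame, threshold: float = 0.5) -> pd.DataFrame:
        """df needs columns: recurrence (0/1), risk (predicted probability), race_ethnicity."""
        rows = []
        for group, sub in df.groupby("race_ethnicity"):
            predicted_low_risk = sub["risk"] < threshold
            fn = int((predicted_low_risk & (sub["recurrence"] == 1)).sum())  # missed recurrences
            tn = int((predicted_low_risk & (sub["recurrence"] == 0)).sum())  # correct low-risk calls
            positives = int((sub["recurrence"] == 1).sum())
            rows.append({
                "group": group,
                # Share of true recurrences the model misses in this group.
                "false_negative_rate": fn / positives if positives else np.nan,
                # Share of low-risk calls that are truly recurrence-free.
                "negative_predictive_value": tn / (tn + fn) if (tn + fn) else np.nan,
            })
        return pd.DataFrame(rows)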

Conversely, including race and ethnicity as a predictor of postoperative cancer recurrence improved the model’s accuracy and increased “algorithmic fairness” in terms of calibration slope, discriminative ability, positive predictive value, and false-negative rates. The false-negative rate for Hispanic patients was 9% vs. 8% for non-Hispanic white patients.
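
Calibration slope, one of the fairness measures named above, is conventionally estimated by regressing the observed outcome on the log-odds of the predicted risk, with a slope of 1.0 indicating ideal calibration. A per-group sketch under the same assumed column names:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    def calibration_slope_by_group(df: pd.DataFrame) -> pd.Series:
        """Slope from refitting each group's outcome on the log-odds of predicted risk."""
        slopes = {}
        for group, sub in df.groupby("race_ethnicity"):
            log_odds = np.log(sub["risk"] / (1.0 - sub["risk"]))
            fit = sm.Logit(sub["recurrence"], sm.add_constant(log_odds)).fit(disp=0)
            slopes[group] = float(fit.params.iloc[1])  # coefficient on the log-odds term
        return pd.Series(slopes, name="calibration_slope")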

Including race interaction terms or using race-stratified models did not improve model fairness, likely because of the small sample sizes in the subgroups, the authors add.

‘No one-size-fits-all answer’

“There is no one-size-fits-all answer to whether race/ethnicity should be included, because the health disparity consequences that can result from each clinical decision are different,” Dr. Khor told this news organization.

“The downstream harms and benefits of including or excluding race will need to be carefully considered in each case,” Dr. Khor said.

“When developing a clinical risk prediction algorithm, one should consider the potential racial/ethnic biases present in clinical practice, which translate to bias in the data,” Dr. Khor added. “Care must be taken to think through the implications of such biases during the algorithm development and evaluation process in order to avoid further propagating those biases.”

The coauthors of a linked commentary say this study “highlights current challenges in measuring and addressing algorithmic bias, with implications for both patient care and health policy decision-making.”

Ankur Pandya, PhD, with Harvard School of Public Health, Boston, and Jinyi Zhu, PhD, with Vanderbilt University, Nashville, Tenn., agree that there is no “one-size-fits-all solution” for confronting algorithmic bias, such as always excluding race and ethnicity from risk models.

“When possible, approaches for identifying and responding to algorithmic bias should focus on the decisions made by patients and policymakers as they relate to the ultimate outcomes of interest (such as length of life, quality of life, and costs) and the distribution of these outcomes across the subgroups that define important health disparities,” Dr. Pandya and Dr. Zhu suggest.

“What is most promising,” they write, is the high level of engagement with this cause in recent years from researchers, philosophers, policymakers, physicians and other health care professionals, caregivers, and patients, “suggesting that algorithmic bias will not be left unchecked as access to unprecedented amounts of data and methods continues to increase moving forward.”

This research was supported by a grant from the National Cancer Institute of the National Institutes of Health. The authors and editorial writers have disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.
