AI bias may worsen COVID-19 health disparities for people of color
Developers and data scientists have long warned that, in machine learning and artificial intelligence, biased data often leads to biased models.
A new article in the Journal of the American Medical Informatics Association argues that such biased models may further the disproportionate impact the COVID-19 pandemic is having on people of color.
The article's authors, Eliane Röösli, of the Swiss Federal Institute of Technology, and Brian Rice and Tina Hernandez-Boussard, of Stanford University, noted that even as the global research community has rushed to push out new findings, it risks producing biased prediction models.
"If not properly addressed, propagating these biases under the mantle of AI has the potential to exaggerate the health disparities faced by minority populations already bearing the highest disease burden," wrote the researchers.
WHY IT MATTERS
The COVID-19 pandemic has had an outsized impact on people of color, worsened by existing disparities in healthcare and systemic racism.
At the same time, researchers noted, COVID-19 prediction models can present serious shortcomings, especially regarding potential bias.
Citing a recent systematic review of COVID-19 prediction models, they wrote: "The most frequent problems encountered were unrepresentative data samples, high likelihood of model overfitting, and imprecise reporting of study populations and intended model use."
The researchers flagged the danger of regarding AI as intrinsically objective, particularly when building models for the optimal allocation of scarce resources such as ventilators and intensive care unit beds.
"These tools are built from biased data reflecting biased healthcare systems and are thus themselves also at high risk of bias – even if explicitly excluding sensitive attributes such as race or gender," they wrote.
For example, they argued, models that include comorbidities associated with COVID-19 could reinforce the structural biases that lead to some groups experiencing those comorbidities.
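To make that proxy mechanism concrete, here is a minimal, hypothetical sketch in Python (our illustration, not code from the JAMIA article): a triage-style model is trained with the sensitive attribute explicitly excluded, yet its risk scores still differ systematically by group, because a correlated comorbidity feature carries the same information.

```python
# Illustrative sketch only. Shows how a model that never sees a sensitive
# attribute can still encode it through a correlated "proxy" feature
# (here, a synthetic comorbidity score).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical sensitive attribute (e.g., minority group membership).
group = rng.binomial(1, 0.3, n)

# Comorbidity burden correlates with group membership because of upstream
# structural inequities, not biology.
comorbidity = rng.normal(loc=1.0 * group, scale=1.0)

# The triage outcome depends on comorbidity, so the training label itself
# inherits the upstream disparity.
outcome = rng.binomial(1, 1 / (1 + np.exp(-(comorbidity - 0.5))))

# Train "fair by omission": the model is never given `group`.
X = comorbidity.reshape(-1, 1)
model = LogisticRegression().fit(X, outcome)
scores = model.predict_proba(X)[:, 1]

# The excluded attribute still drives the risk scores via the proxy.
print(f"mean risk score, group 0: {scores[group == 0].mean():.3f}")
print(f"mean risk score, group 1: {scores[group == 1].mean():.3f}")
```

In this toy setup, dropping the sensitive column changes nothing about the disparity in scores, which is the researchers' point: exclusion alone does not de-bias a model built on biased data.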
"Resource allocation models must ... go beyond their basic utilitarian foundation to avoid further harming minority groups already suffering the most from COVID-19, based on health inequalities rooted in prior systemic discrimination," they wrote.
To manage these challenges, the researchers suggested implementing transparency frameworks and reporting standards, including publicizing the source code of AI models.
They also encouraged regulatory infrastructure that prioritizes broad-based data sharing, noting that models developed at academic healthcare systems may not represent the general U.S. population.
"COVID-related data are being produced at incredible speed but these data remain siloed within each country or academic institute, in part due to a lack of interoperability and in part due to a lack of appropriate incentives," they wrote.
THE LARGER TREND
Stakeholders have presented the prevention of algorithmic bias as an integral part of ethical AI model development.
In early 2019, the Duke-Margolis Center for Health Policy released a report that named bias mitigation in AI as a priority for developers, regulators, clinicians and policymakers, among others. Health systems, its authors wrote, will need to develop best practices that can address any bias introduced by training data.
Later that year, U.S. Sens. Cory Booker, D-N.J., and Ron Wyden, D-Ore., urged the Trump administration and major insurers to combat racial bias in healthcare data algorithms.
"In healthcare, there is great promise in using algorithms to sort patients and target care to those most in need. However, these systems are not immune to the problem of bias," said the senators.
ON THE RECORD
"There is hope that AI can help guide treatment decisions, including the allocation of scarce resources within this crisis. However, the hasty adoption of AI tools bears great risk due to biased, unrepresentative training data and a lack of a regulated COVID-19 data resource for validation purposes," wrote researchers in the JAMIA article.
"Given the pervasiveness of biases, a failure to proactively develop comprehensive mitigation strategies during the COVID-19 pandemic risks exacerbating existing health disparities and hindering the adoption of AI tools capable of improving patient outcomes," they wrote.
Kat Jercich is senior editor of Healthcare IT News.
Twitter: @kjercich
Healthcare IT News is a HIMSS Media publication.