Racial and ethnic bias pervasive in EHR notes, UChicago study shows

Using machine learning, researchers found that Black patients had higher odds of having at least one negative descriptor in their electronic health record history and physical notes.
By Kat Jercich

A study published in Health Affairs this week used machine learning to investigate whether providers' negative patient descriptors in electronic health records varied by race or ethnicity.  

Compared with white patients, Black patients had more than 2.5 times the odds of having at least one negative descriptor in their history and physical notes, according to the study, conducted by University of Chicago researchers.

"This difference may indicate implicit racial bias, not only among individual providers but also among the broader beliefs and attitudes maintained by the healthcare system," the researchers wrote.  

"Our findings raise concerns about stigmatizing language in the EHR and its potential to exacerbate racial and ethnic healthcare disparities," they observed.  

WHY IT MATTERS  

By analyzing the records of 18,459 patients at a Chicago-based academic medical center, the researchers sought to examine the use of potentially stigmatizing language in the EHR.

As the study authors note, decades of evidence point to the unequal treatment patients of color face in the U.S. healthcare system.

Studies have also documented the effect of implicit bias on patient-provider relationships.

For this study, researchers developed a model to analyze the clinical notes dataset, training it to categorize sentences containing any of 15 descriptors, such as "(non-)compliant," "combative," "aggressive" and "unpleasant," as negative, positive or out of context.
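The study's pipeline itself isn't reproduced in the article, but the general shape of the task (flagging sentences that contain a target descriptor, then classifying each usage) can be sketched roughly as follows. The descriptor list here is abridged, and the rule-based classifier is a purely illustrative stand-in for the study's trained model, not its actual method:

```python
import re

# Abridged descriptor list: the study tracked 15 such terms; these are
# the four examples named in the article, plus "compliant" itself.
DESCRIPTORS = ["compliant", "non-compliant", "combative", "aggressive", "unpleasant"]

def candidate_sentences(note: str) -> list[str]:
    """Split a note into sentences, keeping those with a target descriptor."""
    sentences = re.split(r"(?<=[.!?])\s+", note)
    pattern = re.compile("|".join(re.escape(d) for d in DESCRIPTORS), re.IGNORECASE)
    return [s for s in sentences if pattern.search(s)]

def classify(sentence: str) -> str:
    """Toy rule-based stand-in for the study's trained classifier, which
    labeled each usage as negative, positive or out of context."""
    s = sentence.lower()
    if re.search(r"\b(?:not|non)[- ]?compliant\b|combative|aggressive|unpleasant", s):
        return "negative"
    if "compliant" in s:
        return "positive"
    return "out of context"

note = "Patient remains non-compliant with insulin. He was pleasant and engaged."
for sentence in candidate_sentences(note):
    print(classify(sentence), "->", sentence)
```

In practice a classifier like this would be trained on annotated sentences rather than hand-written rules; the point of the three categories is that a word like "compliant" can cut either way depending on context.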

In models adjusted for socio-demographic and health characteristics, Black patients had 2.54 times the adjusted odds of having one or more negative descriptors in the EHR, compared with white patients.   
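Adjusted odds ratios of this kind come from a logistic regression that includes the covariates being adjusted for. A minimal sketch of the computation, using synthetic data and made-up column names since the patient-level data is not public:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data; the variables are illustrative, not the study's.
rng = np.random.default_rng(0)
n = 5_000
df = pd.DataFrame({
    "negative_descriptor": rng.integers(0, 2, n),  # 1 if >=1 negative descriptor
    "black": rng.integers(0, 2, n),                # race indicator, white = reference
    "age": rng.normal(55, 15, n),
    "medicaid": rng.integers(0, 2, n),
})

# Logistic regression adjusting for sociodemographic/health covariates.
model = smf.logit("negative_descriptor ~ black + age + medicaid", data=df).fit(disp=0)

# The exponentiated coefficient on the race indicator is the adjusted odds
# ratio; in the study this figure was 2.54 for Black vs. white patients.
print(np.exp(model.params["black"]))
```

With real data, the 2.54 figure reads as: holding the listed covariates fixed, the odds of having at least one negative descriptor were roughly two and a half times higher for Black patients.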

The likelihood was similar when patients with ICD-10 codes related to delirium, substance use or other mental and behavioral diagnoses were excluded from the sample.  

Medicaid users and unmarried patients were also more likely to have negative descriptors in the EHR. By contrast, notes written after March 1, 2020, and in an outpatient setting were less likely to have a negative descriptor.  

The researchers raised concerns about the enduring ramifications of such records, although they noted that further inquiry on this question is necessary.

"We theorize that negative descriptors in a patient's EHR may assign negative intrinsic value to patients. Subsequent providers may read, be affected by and perpetuate the negative descriptors, reinforcing stigma to other healthcare teams," they wrote.  

"It is also plausible that if a provider with implicit biases were to document a patient encounter with stigmatizing language, the note may influence the perceptions and decisions of other members of the care team, irrespective of the other team members' biases or lack thereof," they continued.  

The researchers acknowledged several limitations of the study, including the potential for COVID-19 to have affected clinicians' behavior.

"We recognize that the use of negative descriptors might not necessarily reflect bias among individual providers; rather, it may reflect a broader systemic acceptability of using negative patient descriptors as a surrogate for identifying structural barriers," they added.  

THE LARGER TREND  

Researchers have sought to examine the role health IT can play in reproducing bias.  

For instance, a study from 2020 found that artificial intelligence can make COVID-19 health disparities worse for people of color.  

Additionally, experts have noted that even innocuous-seeming data may have unforeseen consequences.  

ON THE RECORD  

"The goal of addressing implicit bias is to address the underlying mechanisms that prompt the use of negative descriptors to describe patients," wrote researchers in the Health Affairs study.  

"This includes preventing the introduction of biased language by providers, preventing the perpetuation of biased language by members of the healthcare team, and increasing awareness of the effects of providers' language on the patient relationship.  

"Interventions may include provider bias training and addressing healthcare system factors that may predispose providers toward expressions of bias," they added.

Kat Jercich is senior editor of Healthcare IT News.
Twitter: @kjercich
Email: kjercich@himss.org
Healthcare IT News is a HIMSS Media publication.
