How one AI company works to reduce algorithmic bias
Artificial intelligence developers need to be held accountable for any biases that arise in their algorithms.
Pharmaceutical companies use vendor AiCure to assess how patients take their medications during clinical trials. Using AI and computer vision via a patient's smartphone, the vendor helps ensure patients get the support they need and that incorrect or missed doses don't interfere with a trial's data.
In the company's early days, around 2011, staff began to notice that its facial recognition algorithm was not working properly for darker-skinned patients, because the open-source data set used to train the algorithm had been built largely from images of fair-skinned people.
The company rebuilt the algorithm by recruiting Black volunteers to contribute videos. Now, with more than one million dosing interactions recorded, AiCure's algorithms work with patients of all skin tones, allowing for unbiased visual and audio data capture.
Healthcare IT News sat down with Dr. Ed Ikeguchi, CEO of AiCure, to discuss biases in AI. He believes a similar checks-and-balances process is needed throughout the industry to understand if and when AI falls short in real-world scenarios. He says there now is a responsibility – both ethically and for the sake of good science – to thoroughly test algorithms, ensure their data sets are representative of the broader population, and establish a "recall" when they don't meet the needs of all populations.
Q. How can artificial intelligence developers be better held accountable for the biases that arise in algorithms?
A. AI has only recently become prominent in our society, from how we unlock our smartphones to how our credit scores are evaluated to how drugs are developed and patients are cared for. But the same technology that holds great promise and influence is also among the least governed, and it can put underrepresented populations at a disadvantage. Especially when that disadvantage is related to one's health, there's an urgent need to create more accountability for how an algorithm performs in the real world.
AI is only as strong as the data it's fed, and lately the credibility of that data backbone has increasingly been called into question. Today's AI developers lack access to large, diverse data sets on which to train and validate new tools.
They often need to rely on open-source data sets, but many of these were built by computer programmer volunteers, a predominantly white population. Because algorithms are often trained on single-origin data samples with limited diversity, technology that appeared highly accurate in research may prove unreliable when applied in real-world scenarios to a broader population of different races, genders, ages and more.
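To make that concrete, here is a minimal, hypothetical sketch of what auditing a training set's demographic makeup might look like. The column names, toy data and 5% underrepresentation cutoff are illustrative assumptions for the example, not any vendor's actual schema or criteria.

```python
# Hypothetical sketch: surface demographic gaps in a training set before training.
# Column names ("skin_tone", "age_group") and the 5% cutoff are assumptions.
import pandas as pd

def audit_composition(df, columns, min_share=0.05):
    """Print each group's share of the data and flag groups below min_share."""
    for col in columns:
        shares = df[col].value_counts(normalize=True)
        print(f"\n{col}:")
        for group, share in shares.items():
            flag = "  <-- underrepresented" if share < min_share else ""
            print(f"  {group}: {share:.1%}{flag}")

if __name__ == "__main__":
    # Toy metadata standing in for a real training set.
    df = pd.DataFrame({
        "skin_tone": ["light"] * 96 + ["dark"] * 4,
        "age_group": ["18-40"] * 60 + ["41-65"] * 30 + ["65+"] * 10,
    })
    audit_composition(df, ["skin_tone", "age_group"])
```

A check like this only reveals gaps in the metadata you already record; it cannot compensate for attributes that were never captured in the first place.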
There needs to be an element of governance and peer review for all algorithms, as even the most solid and tested algorithm is bound to produce unexpected results. An algorithm is never done learning; it must be constantly developed and fed more data to improve.
As an industry, we need to become more skeptical of AI's conclusions and encourage transparency. Companies should readily answer basic questions, such as 'How was the algorithm trained? On what basis did it draw this conclusion?'
We need to interrogate and constantly evaluate an algorithm under both common and rare scenarios with varied populations before it's introduced to real-world situations.
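As a rough illustration of what that kind of evaluation could look like in practice, the sketch below computes a model's accuracy separately for each demographic subgroup on held-out data and flags any group that falls below a chosen threshold. The field names, toy predictions and 90% cutoff are assumptions for the example, not AiCure's actual process.

```python
# Hypothetical sketch: per-subgroup accuracy on held-out validation results,
# flagging groups where the model underperforms a chosen threshold.
import pandas as pd
from sklearn.metrics import accuracy_score

def evaluate_by_subgroup(results, group_col, threshold=0.90):
    """Return accuracy per subgroup and mark groups below the threshold."""
    rows = []
    for group, subset in results.groupby(group_col):
        acc = accuracy_score(subset["label"], subset["prediction"])
        rows.append({"group": group, "n": len(subset),
                     "accuracy": acc, "needs_review": acc < threshold})
    return pd.DataFrame(rows)

if __name__ == "__main__":
    # Toy held-out results standing in for real validation data.
    results = pd.DataFrame({
        "skin_tone":  ["light"] * 8 + ["dark"] * 8,
        "label":      [1, 1, 0, 0, 1, 0, 1, 0] * 2,
        "prediction": [1, 1, 0, 0, 1, 0, 1, 0] + [1, 0, 0, 1, 1, 1, 1, 0],
    })
    print(evaluate_by_subgroup(results, "skin_tone"))
```

Reporting per-group sample sizes alongside accuracy matters here, since a strong overall score can hide poor performance on a small subgroup.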
Q. You have said you believe there now is a responsibility to thoroughly test algorithms, ensure their data sets are representative of the broader population, and establish a "recall" when they don't meet the needs of all populations. Please elaborate.
A. A new drug goes through years of clinical trial testing with thousands of patients, but when it's given to millions of patients, there are bound to be side effects or new discoveries that could never have been hypothesized. Just as we have processes to recall and reassess drugs, there should be a similar process for AI when it leads to erroneous conclusions or doesn't work for patients with certain skin tones in real-world scenarios.
As AI increasingly becomes a pivotal part of how we research drugs and care for patients, the stakes are too high to take shortcuts. There's a responsibility to contribute to good science by thoroughly testing algorithms and establishing a system of checks and balances if things go awry.
It's time we normalize going back to the drawing board when AI doesn't perform as planned outside of a controlled research environment, even if that happens more often than expected. Making healthcare a more inclusive industry starts with the technology our patients and pharmaceutical companies use.
Twitter: @SiwickiHealthIT
Email the writer: bsiwicki@himss.org
Healthcare IT News is a HIMSS Media publication.