The EU grapples with the ethics of AI in healthcare
AI was deployed across many areas of healthcare during the pandemic, from analysing the sound of a patient’s cough to predicting mortality. More than 4,000 scientific papers have been published on AI and COVID-19 since the pandemic began, Alessandro Blasimme, a senior scientist in the Health Policy Lab at the Swiss Federal Institute of Technology in Zurich, told participants at a recent panel on the future of science and technology in Europe.
But policy-makers are grappling with the raft of ethical challenges this rapid rate of innovation presents.
For Blasimme, this rapid deployment raises key moral questions that cannot be left to developers. “I’m not saying they [AI technologies] are not precise; they might be very accurate,” he told Healthcare IT News. “It’s just that accuracy is one side of this coin.”
Delegating decision-making to an algorithm?
The ethical quandaries posed by AI in healthcare range from opaque decision-making to biases against certain social groups becoming embedded in a technology. For instance, if an algorithm drew on the fact that older people were more likely to die from COVID, it could introduce age-based bias into its decisions.
This matters because the rationing of scarce resources will not be unique to the pandemic. “In general, outside the context of an emergency, medicine faces rationing issues all the time,” Blasimme points out.
AI also featured in public health efforts on the Greek border, where officials used an algorithm to decide which entrants had to undergo additional COVID testing, a strategy hailed as revitalising Greek tourism. The technology could be repurposed for surveillance, Blasimme says. And while the public have accepted border restrictions, “what people might not be used to is the idea that there is a system that does this screening in the background – something that is not visible, it’s not transparent.”
Nor will delegating hard decisions to AI reduce the moral burden on healthcare professionals, he adds. “I don’t think the moral fatigue that comes with making these very hard decisions about allocating your last intensive care bed – that a doctor would feel better if the decision was delegated to an algorithm.”
A global leader in AI policy
To exploit the potential of AI in healthcare, Europe must put in place effective oversight, and communicate with healthcare providers and citizens to garner trust, argued a policy brief published earlier this year by the European Policy Centre (EPC). In addition, the companies developing these technologies need more guidance on how regulatory standards will evolve, Blasimme suggests. Although various pieces of legislation are being prepared, there are “grey areas,” with health AI falling between regulation for medical devices and the upcoming AI Act.
The AI Act was proposed in 2021 and is expected to pass in 2024. Viewed by the EU as a flagship policy, it will be the first international regulation for AI, according to Andrea G. Rodríguez, lead digital policy analyst at the EPC. Regulating a technology like AI is particularly challenging, Rodríguez says. “We don’t know what the status of AI will be in 2024 or 2025. We don’t know to what extent what we’re trying to regulate now will be applicable to the future.”
In the meantime, AI health technologies are entering the market. Blasimme cautions: “We’d better establish the rules as soon as possible before it’s too late.”