Philips CTO outlines ethical guidelines for AI in healthcare
The use of artificial intelligence and machine learning algorithms in healthcare is poised to expand significantly over the next few years, but beyond the investment strategies and technological foundations lie serious questions around the ethical and responsible use of AI.
In an effort to clarify the company's position and add to the debate, Royal Philips executive vice president and chief technology officer Henk van Houten has published a list of five guiding principles for the design and responsible use of AI in healthcare and personal health applications.
The five principles – well-being, oversight, robustness, fairness, and transparency – all stem from the basic viewpoint that AI-enabled solutions should complement and benefit customers, patients, and society as a whole.
First and foremost, van Houten argues, well-being should be front of mind when developing healthcare AI solutions: they should help alleviate overstretched healthcare systems and, more importantly, act as a means of supplying proactive care, informing and supporting healthy living over the course of a person's entire life.
When it comes to oversight, van Houten called for proper validation and interpretation of AI-generated insights through the participation and collaboration of AI engineers, data scientists, and clinical experts.
A robust set of control mechanisms is seen as necessary not only to build trust in AI among patients and clinicians, but also to prevent unintended or deliberate misuse.
"Training and education can also go a long way in safeguarding the proper use of AI," van Houten wrote. "It's vital that every user has a keen understanding of the strengths and limitations of a specific AI-enabled solution."
The fourth principle, fairness, argues for ensuring bias and discrimination are avoided in AI-based solutions – an issue currently being discussed in the United States.
In December, Senators Cory Booker, D-New Jersey, and Ron Wyden, D-Oregon, urged the Trump administration and some of the nation's biggest health insurers – Humana and Blue Cross among them – to be aware of potential racial bias in healthcare data algorithms.
Van Houten also takes the perspective that the development and validation of AI must be based on data that accurately represent the diversity of people in the target group, and that when AI is applied to a different target group, it should first be revalidated or possibly retrained.
Finally, Philips sees the development of AI-enabled solutions in partnership – among providers, payers, patients, researchers, and regulators – as a way of ensuring optimal transparency.
"Taken together, I believe these five principles can help pave the way towards responsible use of AI in healthcare and personal health applications," van Houten's blog post concluded.
Philips already offers more than a dozen AI-enabled solutions, such as VitalEye, which enables integrated, automated breathing detection for a broad range of patient sizes without accessories or manual adjustments, and IntelliSpace Portal, a comprehensive, advanced data integration, visualization, and analysis platform designed to enhance diagnostic confidence.
Nathan Eddy is a healthcare and technology freelancer based in Berlin.
Email the writer: nathaneddy@gmail.com
Twitter: @dropdeaded209