HITRUST unveils new tool for AI risk management

The assessment is designed to help healthcare organizations ensure they have the right governance in place for deploying artificial intelligence and machine learning models.
By Mike Miliard


HITRUST this week unveiled its new AI Risk Management Assessment, which it bills as a comprehensive approach to mitigating the risks of artificial intelligence deployments in healthcare and other organizations.

WHY IT MATTERS
The assessment is meant to help ensure that organizations have adequate governance in place for implementing AI tools, and that those guardrails can be effectively communicated by companies to management teams and boards of directors.

HITRUST says its approach aligns with standards issued by both NIST and ISO/IEC, and is supported by an assessment framework and SaaS platform to help adopters demonstrate that AI risk-management outcomes are met.

"The total effort to address risk management at scale can take weeks or months of labor just to design and maintain an assessment approach, socialize that approach and to prepare for the assessment work itself," added Bimal Sheth, EVP of standards development and assurance operations at HITRUST. "Even then, there can be questions about completeness and quality and the work can be exhausting where the organization wishes to align to multiple industry standards."

Designed for any organization using such tools – including machine learning algorithms and large language models for generative AI – the framework is meant to help healthcare and other leaders validate their approach to risk management for these fast-evolving technologies.

"The AI RM solution can be used as a self-assessment and benchmarking tool, or companies can engage one of over 100 HITRUST external assessor firms to validate and verify implementation," said Jeremy Huval, chief innovation officer at HITRUST, in a statement.

THE LARGER TREND
The new risk-management tool comes less than a year after HITRUST announced its AI Assurance Program in October 2023. That project seeks to offer an approach, inspired by the HITRUST Common Security Framework, to help healthcare organizations develop strategies for secure, sustainable and trustworthy AI models.

HITRUST says it also plans to release a new AI Security Certification Program – which will include AI-specific control specifications incorporated in the HITRUST CSF and enhancements to the company's assurance methodologies, systems and ecosystem – toward the end of this year.

Earlier this month, meanwhile, NIST unveiled an open-source platform for AI safety assessments. The free tool, called Dioptra, aims to help developers understand and mitigate some of the unique data risks of AI and machine learning models.

ON THE RECORD
"Standards for AI risk management are evolving rapidly, and it is crucial for companies to address these principles with a thoughtful and comprehensive approach," said Robert Booker, chief strategy officer at HITRUST, in a statement announcing the AI Risk Management Assessment. "Governance of this important and powerful capability is vital to unlocking the potential that AI offers, and risk management is critical to implementing AI responsibly."

Mike Miliard is executive editor of Healthcare IT News.
Email the writer: mike.miliard@himssmedia.com
Healthcare IT News is a HIMSS publication.

