Texas AG settles with clinical genAI company

The investigation, billed as the first of its kind, had alleged that Dallas-based Pieces Technologies made "deceptive claims" about the accuracy of its artificial intelligence products, which are deployed at several hospitals in the Lone Star State.
By Andrea Fox
01:45 AM

Photo: Bo Zaunders/Getty Images
 

Texas Attorney General Ken Paxton announced a settlement with Dallas-based artificial intelligence developer Pieces Technologies, resolving allegations that the company's generative AI tools had put patient safety at risk by overpromising on accuracy.

WHY IT MATTERS
The Irving, Texas-based company uses generative AI to summarize real-time electronic health record data about patient conditions and treatments. Its software is used in at least four hospitals in the state, according to the settlement.

The company advertised a "severe hallucination rate" of less than one per 100,000, according to the settlement agreement.

While Pieces denied any wrongdoing or liability and said it did not violate the Texas Deceptive Trade Practices-Consumer Protection Act, the settlement requires the company to "clearly and conspicuously disclose" the meaning or definition of that metric and describe how it was calculated – or else "retain an independent, third-party auditor to assess, measure or substantiate the performance or characteristics of its products and services."

Pieces agreed to comply with the settlement provisions for five years, but said in a statement emailed to Healthcare IT News on Friday that the AG's announcement misrepresents the Assurance of Voluntary Compliance into which it entered.

"Pieces strongly supports the need for additional oversight and regulation of clinical generative AI," and signed the agreement "as an opportunity to advance those conversations in good faith."

THE LARGER TREND

As artificial intelligence – particularly genAI – becomes more widely used in hospitals and health systems, challenges around models' accuracy and transparency have become much more critical, especially as they find their way into clinical settings.

A recent study from the University of Massachusetts Amherst and Mendel, an AI company focused on hallucination detection, found that different types of hallucinations occur in AI-generated medical record summaries, according to an August report in Clinical Trials Arena.

Researchers asked two large language models – OpenAI's GPT-4o and Meta's Llama-3 – to generate medical summaries from 50 detailed medical notes. They found that GPT-4o produced 21 summaries with incorrect information and 50 with generalized information, while Llama-3 produced 19 with errors and 47 with generalizations.

As AI tools that generate summaries from electronic health records and other medical data proliferate, their reliability remains questionable.

"I think where we are with generative AI is it's not transparent, it's not consistent and it's not reliable yet," Dr. John Halamka, president of the Mayo Clinic Platform, told Healthcare IT News last year. "So we have to be a little bit careful with the use cases we choose."

To better assess AI, the Mayo Clinic Platform developed a risk-classification system to qualify algorithms before they are used externally.

Dr. Sonya Makhni, the platform's medical director and senior associate consultant for Mayo Clinic's Department of Hospital Internal Medicine, explained that, when thinking through the safe use of AI, healthcare organizations "should consider how an AI solution may impact clinical outcomes and what the potential risks are if an algorithm is incorrect or biased or if actions taken on an algorithm are incorrect or biased."

She said it's the "responsibility of both the solution developers and the end-users to frame an AI solution in terms of risk to the best of their abilities."

ON THE RECORD

"AI companies offering products used in high-risk settings owe it to the public and to their clients to be transparent about their risks, limitations and appropriate use," said Texas AG Ken Paxton in a statement about the Pieces Technologies settlement. 

"Hospitals and other healthcare entities must consider whether AI products are appropriate and train their employees accordingly," he added.

This article was updated with a statement from Pieces Technologies on September 20, 2024. Correction: An earlier version of the article contained a headline that indicated a lawsuit, but the settlement was the result of an investigation. We regret the error.

Andrea Fox is senior editor of Healthcare IT News.
Email: afox@himss.org

Healthcare IT News is a HIMSS Media publication.
