A beginner's guide to 'instafraud'

A new form of AI-powered fraud is posing risks to healthcare bottom lines. Medicomp CEO David Lareau describes what it is, how to fight it – and how to help cautious executives concerned with the double-edged sword of artificial intelligence.
By Bill Siwicki
12:35 PM

David Lareau, CEO of Medicomp Systems

Photo: David Lareau

A recent investigation by The Wall Street Journal revealed that a staggering $50 billion has been pocketed by insurers from Medicare for diseases no doctor actually treated.

Perhaps one of the most concerning aspects of this explosion of fraud is the emergence of what industry insiders are calling "instafraud" – a practice where artificial intelligence, particularly large language models, is used to generate false or exaggerated medical documentation. 

This AI-driven documentation can instantly generate significant additional payments per patient per year by fabricating or upcoding diagnoses that no healthcare provider ever made.

We interviewed Medicomp CEO David Lareau to discuss the double-edged sword of AI technologies, which have tremendous potential to transform the industry – but can also be used by bad actors to create documentation that supports upcoded diagnoses.

We talked with Lareau about instafraud, the role AI large language models play, how to fight instafraud, and what he would say to peers and C-suite executives in hospitals and health systems who are not so sure about AI because of things like instafraud.

Q. Please describe in detail what instafraud is, how it works and who exactly is benefiting.

A. Our Chief Medical Officer Dr. Jay Anders introduced me to the concept of instafraud as it relates to the fraudulent inflation of patient risk adjustment scores, sometimes by using large language models to produce visit documentation that contains diagnoses of conditions the patient does not actually have, but for which the LLM can generate believable notes.

After taking a prompt engineering course, Dr. Anders saw how easy it is to send a list of diagnoses to an LLM and get back a complete note that purports to back up those diagnoses, without any evidence or investigation on the part of the provider. We fear this will prove too easy and too lucrative for providers and insurance companies to resist as a way to generate additional revenue.

We have prior experience with unscrupulous persons and enterprises using technology to "game the system." Our first such encounter was when the 1997 evaluation and management (E&M) guidelines were introduced and potential users asked, "Can you tell me the one or two additional data elements I would have to enter to get to the next highest level of encounter? Then I can just add them to the note, and that will increase payments."

More recently, people are asking how they can use AI to "suspect" for additional diagnoses to generate higher risk adjustment factor (RAF) scores, regardless of whether the patient has the condition. This approach is much more common than using AI to validate that the documentation is complete and correct for each diagnosis.

It is not only by using AI that enterprises and providers commit fraud, but by implementing policies to "find" potential diagnoses that a patient does not have and including them in the record. For example, having a home healthcare worker ask a patient if they ever don't feel like getting out of bed in the morning, and getting a response of "yes" might generate a diagnosis of depression, which qualifies for a higher RAF score.

Who doesn't sometimes feel like not getting out of bed? But without a proper workup and other findings consistent with depression, the diagnosis of depression is potentially fraudulent.

Q. What role do AI large language models play? How do criminals get their hands on LLMs and the data needed to support the work that LLMs do?

A. LLMs have emerged as a central component in the phenomenon of instafraud within the Medicare Advantage system. These sophisticated AI models are being exploited to generate false or exaggerated medical documentation at an alarming scale and speed.

LLMs excel at processing and modifying vast amounts of patient data, creating convincing yet fabricated medical narratives that can be difficult to distinguish from genuine records. This capability allows for the instant generation of fraudulent diagnoses that can result in up to $10,000 more per patient per year in improper payments.
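To make that figure concrete, here is a minimal sketch of how a Medicare Advantage payment scales with the risk adjustment factor (RAF) score. The base rate and diagnosis weights below are hypothetical round numbers chosen for illustration, not actual CMS-HCC coefficients.

# Illustrative sketch: how unsupported diagnoses inflate a risk-adjusted payment.
# All figures are hypothetical, not actual CMS-HCC coefficients or base rates.

MONTHLY_BASE_RATE = 850.0  # hypothetical per-member-per-month base payment

def annual_payment(raf_score: float) -> float:
    """Annual payment scales linearly with the RAF score."""
    return MONTHLY_BASE_RATE * raf_score * 12

baseline_raf = 1.00                        # demographics plus genuine conditions
unsupported_weights = [0.33, 0.30, 0.29]   # hypothetical weights for three unsupported diagnoses

honest = annual_payment(baseline_raf)
upcoded = annual_payment(baseline_raf + sum(unsupported_weights))

print(f"Honest documentation: ${honest:,.0f} per year")
print(f"With added diagnoses: ${upcoded:,.0f} per year")
print(f"Improper increase:    ${upcoded - honest:,.0f} per year")

In this hypothetical, three unsupported codes add roughly $9,400 per patient per year, in line with the figure cited above.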

To be clear, the folks using LLMs and data to commit instafraud are not your garden-variety "criminals."

Indeed, insurance companies are the primary perpetrators of this technology-driven fraud, and likely leverage their existing access to extensive patient data through their normal operations. They may be using commercially available AI systems, which have become increasingly accessible, or potentially developing proprietary systems tailored to this purpose.

This raises serious concerns about the misuse of patient data and the ethical implications of AI deployment in healthcare settings.

Q. How can instafraud be fought? And who is responsible for doing the fighting?

A. The responsibility for combating fraud is distributed among various stakeholders. Regulators and policymakers must implement stronger oversight and penalties to deter fraudulent behavior. Healthcare providers play a crucial role in validating diagnoses and challenging false documentation. Technology developers bear the responsibility of creating ethical AI systems with proper safeguards built in.

Insurance companies must commit to using AI responsibly and transparently, prioritizing patient care over profit. And auditors and investigators are essential in detecting and reporting fraudulent practices, serving as a critical line of defense against instafraud.

Ultimately, CMS is responsible for the administration of the Medicare Advantage program and must be more proactive in both detecting fraud and holding enterprises – and individuals – responsible for insurance fraud committed by their organizations.

Tools are available to review charts and coding for fraud, but without serious consequences for those supervising and committing fraud, enforcement efforts will lack sufficient teeth and financial penalties will continue to be viewed as a mere cost of doing business.
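As a rough illustration of what such a chart-review tool might check, the sketch below flags billed diagnosis codes that have no supporting evidence documented in the visit note. The ICD-10 codes, evidence terms and simple keyword matching are hypothetical placeholders, not the logic of any actual coding-integrity product.

# Hypothetical documentation-integrity check: flag billed diagnoses with no
# supporting evidence in the note. Codes and evidence terms are placeholders.

EVIDENCE_TERMS = {
    "F33.1": ["phq-9", "depressed mood", "anhedonia", "sleep disturbance"],  # major depressive disorder
    "E11.9": ["a1c", "hemoglobin a1c", "glucose", "metformin"],              # type 2 diabetes
    "I50.22": ["ejection fraction", "echocardiogram", "bnp", "edema"],       # systolic heart failure
}

def unsupported_diagnoses(billed_codes: list[str], note_text: str) -> list[str]:
    """Return billed codes with no documented supporting evidence in the note."""
    note = note_text.lower()
    return [
        code for code in billed_codes
        if not any(term in note for term in EVIDENCE_TERMS.get(code, []))
    ]

note = "Patient reports occasional low energy in the mornings. No other findings."
print(unsupported_diagnoses(["F33.1", "E11.9"], note))  # both flagged for human review

A production tool would work from structured clinical data and far richer evidence models, but the principle is the same: every billed diagnosis should be traceable to documented findings, and anything that is not gets routed to a human reviewer.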

One initial step was establishing a whistleblower program for those reporting insurance fraud. But until there are very serious personal consequences – including possible prison time – the costs of fraud in the Medicare Advantage program will continue to escalate.

For an example of how this can be accomplished, consider the Sarbanes-Oxley Act of 2002, which requires CEOs and CFOs to certify their organization's financial statements. These executives can face significant penalties if they certify the company's books as accurate when they are not – ranging from prison sentences of up to five years to steep fines and other disciplinary action, such as civil and criminal litigation. This has raised the stakes for those who would mislead investors and the public.

A similar requirement for those administering Medicare reimbursement policies and procedures within healthcare enterprises, coupled with whistleblower programs, might provide a more proactive approach to preventing intentional fraud, rather than merely attempting to detect it after the fact.

Q. What would you say to your peers and to C-suite executives in hospitals and health systems who tell you that AI is a double-edged sword and they're not so sure about it?

A. When addressing peers and C-suite executives concerned about the dual nature of AI in healthcare, it's important to emphasize several key points. AI should be viewed as a tool to augment, not replace, human expertise. The concept of "Dr. LLM" is not only misguided but potentially dangerous, as it overlooks the irreplaceable aspects of human medical care such as empathy, intuition and complex decision-making.

A balanced approach is necessary, one that leverages both the computational power of AI and the nuanced judgment of human healthcare professionals. This involves implementing technology-driven guardrails in conjunction with human collaboration to mitigate errors and build trust in AI systems. The focus should be on using AI to improve care delivery, not just to maximize billing or streamline administrative processes.

Healthcare organizations should embrace technologies that enable efficient, effective and trusted clinical use of LLMs, but always in a way that works alongside human clinicians rather than attempting to replace them. It's crucial to recognize the need for robust validation and trust-building measures when implementing AI in healthcare settings. This includes transparent processes, regular audits and clear communication about how AI is being used in patient care.

Ultimately, AI should be viewed as a powerful tool to enhance human decision-making, not as a replacement for it. By adopting this perspective, healthcare organizations can harness the benefits of AI while mitigating its risks, leading to improved patient outcomes and a more efficient, ethical healthcare system.

Follow Bill's HIT coverage on LinkedIn: Bill Siwicki
Email him: bsiwicki@himss.org
Healthcare IT News is a HIMSS Media publication.
