AI Is Being Used to Deny Your Health Insurance Claim. The Algorithm Has a 90% Error Rate and Nobody Cares.

UnitedHealth Group, the largest health insurer in the United States, was sued in 2023 for using an AI system called nH Predict to systematically deny insurance claims for elderly patients in post-acute care. According to the lawsuit, which drew on internal company documents, the system had an error rate of approximately 90 percent: roughly nine in ten of its denials were reversed when patients appealed. The company allegedly kept using it anyway, because the money saved on unchallenged denials exceeded the cost of administering the appeals that did come in.

UnitedHealth is not an outlier. It is the industry’s leading edge.

How Claim Denial AI Works

Health insurers process millions of claims annually. Human reviewers are expensive and slow. AI systems that can evaluate a claim against a database of coverage rules and prior authorisation requirements in seconds are enormously attractive from a cost perspective. Insurers frame these systems as efficiency tools — faster approvals for clear-cut cases, automated flagging of potentially fraudulent claims.

In practice, the systems are calibrated to a denial rate rather than an accuracy rate. An AI system told to minimise fraud will approve most claims because most claims are legitimate. An AI system told to reduce payouts will deny claims at a higher rate and rely on the appeals process to correct the errors. Most policyholders do not appeal. The ones who do face a process designed to be exhausting enough to discourage persistence.
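The calibration claim above can be made concrete with a toy sketch. All the scores and the 30 percent target below are invented for illustration; the point is that the denial threshold is derived from a business target, not from any measure of whether the denials are correct.

```python
# Hypothetical illustration: "calibrating to a denial rate" means picking
# a score threshold from a target quantile of claim scores, with no
# reference to whether the flagged claims are actually invalid.
claim_scores = [0.05, 0.10, 0.12, 0.20, 0.35, 0.40, 0.55, 0.70, 0.85, 0.90]

target_denial_rate = 0.30  # business target, not an accuracy target

# Set the threshold so the top 30% of scores are denied, regardless of
# the model's accuracy on those claims.
sorted_scores = sorted(claim_scores)
cutoff_index = int(len(sorted_scores) * (1 - target_denial_rate))
threshold = sorted_scores[cutoff_index]

denied = [s for s in claim_scores if s >= threshold]
print(len(denied) / len(claim_scores))  # hits the 30% target by construction
```

Swap in any other set of scores and the denial rate stays pinned at the target; only which claims get denied changes.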

The nH Predict Case in Detail

The lawsuit against UnitedHealth alleged that nH Predict was using data on average recovery times for specific conditions to set automatic discharge dates for patients in skilled nursing facilities. When a patient’s actual recovery was slower than the algorithm predicted — which happened frequently because individuals do not recover on statistical averages — the system would flag continued care as medically unnecessary and deny coverage.

The physicians treating these patients would submit appeals with clinical documentation supporting continued care. The system, and the human reviewers working with it, would override those appeals at a high rate. The patients were then discharged prematurely or forced to pay out of pocket for continued care they needed. Some were readmitted to hospitals shortly after discharge at greater cost to the broader healthcare system.

The Business Logic

The calculation that makes this viable is straightforward. A wrongful denial that a patient does not appeal saves the insurer the full cost of the claim. A wrongful denial that is successfully appealed costs the insurer the claim plus the administrative cost of processing the appeal. If the denial-to-successful-appeal ratio is high enough, the system is profitable even at a 90 percent error rate.
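The calculation above can be sketched with hypothetical numbers. Every figure here (claim cost, appeal rate, administrative cost) is an assumption for illustration, not a number from the lawsuit:

```python
# A minimal sketch of the denial economics described above.
# All inputs are hypothetical assumptions.
claim_cost = 10_000         # average cost of the denied care
appeal_admin_cost = 500     # insurer's cost to process one appeal
appeal_rate = 0.10          # share of policyholders who appeal
appeal_success_rate = 0.90  # share of appeals the insurer loses

denials = 1_000

unappealed = denials * (1 - appeal_rate)   # 900 denials never challenged
appealed = denials * appeal_rate           # 100 denials appealed
overturned = appealed * appeal_success_rate  # 90 of those reversed

saved = unappealed * claim_cost      # claims that are simply never paid
paid_back = overturned * claim_cost  # claims paid after a successful appeal
admin = appealed * appeal_admin_cost  # cost of processing the appeals

net_savings = saved - paid_back - admin
print(net_savings)  # 9,000,000 - 900,000 - 50,000 = 8,050,000
```

Even losing 90 percent of appeals, the insurer in this sketch clears over eight million dollars per thousand wrongful denials, because only one policyholder in ten appeals at all.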

This is not a failure of the AI system. It is the system working as financially intended. The failure is in the regulatory framework that permits this use of automated decision-making in healthcare without requiring accuracy standards, transparency, or meaningful accountability when the system causes harm.

The same conflict of interest that corrupts AI research corrupts AI deployment in healthcare: the entities funding and deploying the systems have financial incentives to overstate their accuracy and understate their harm.

What to Do If Your Claim Is Denied

Appeal every denial. In writing. Request the specific clinical criteria used to make the denial decision — insurers in most US states are legally required to provide this. Get your doctor to submit clinical documentation directly contradicting the denial rationale. File a complaint with your state insurance commissioner. Most people do not do any of this because it is exhausting, and the insurer knows that.

The systemic solution requires regulators to impose accuracy standards on AI claim denial systems and to shift the burden of proof: a denial should require positive evidence that a claim is invalid, not a statistical prediction that it might be. That standard does not currently exist. Until it does, the algorithm will keep denying your claim and counting on you not to fight it.

Synthetic Truth

Independent coverage of AI, work, and money. No corporate sponsorship, no stock portfolio, no incentive to mislead. Just honest analysis on where technology, power, and the economy are headed.
