The lawsuit suggests that UnitedHealth Group relied on an AI model it knew to be faulty, expecting that most customers would not challenge erroneous Medicare coverage decisions.
A recently filed class action lawsuit alleges that Minnesota-based UnitedHealth Group uses a “faulty” artificial intelligence algorithm to deny coverage for Medicare patients requesting rehabilitative care.
According to the Star Tribune, the complaint was filed earlier this week in U.S. District Court in Minnesota. The lead plaintiffs in the complaint are families of two patients who lived in north-central Wisconsin and sought long-term care in post-acute facilities.
Together, they claim that UnitedHealth’s Medicare Advantage program wrongfully denied payments for claims from the patients’ health providers, forcing their families to pay up to $70,000 in out-of-pocket expenses.
“This putative class action arises from defendants’ illegal deployment of artificial intelligence (AI) in place of real medical professionals to wrongfully deny elderly patients care owed to them under Medicare Advantage plans by overriding their treating physicians’ determinations as to medically necessary care based on an AI model that defendants know has a 90% error rate,” the complaint alleges.
UnitedHealth Group, based in Minnetonka, has since said that the lawsuit is entirely without merit.
“The tool is used as a guide to help us inform providers, families and other caregivers about what sort of assistance and care the patient may need both in the facility and after returning home,” a UnitedHealth spokesperson said in a statement. “Coverage decisions are based on CMS coverage criteria and the terms of the member’s plan.”
However, attorneys for the class say that UnitedHealth Group accords far greater priority to its artificial intelligence models than it admits, in many cases disciplining and even terminating employees who refused to follow AI-guided determinations.
“The fraudulent scheme affords defendants a clear financial windfall in the form of policy premiums without having to pay for promised care, while the elderly are prematurely kicked out of care facilities nationwide or forced to deplete family savings to continue receiving necessary medical care, all because an AI model ‘disagrees’ with their real live doctors’ determinations,” the complaint says.
The lawsuit further contends that UnitedHealth knew, or should have known, that its algorithm had a high error rate. Attorneys have suggested that this likely worked to UnitedHealth’s advantage, since only a minority of patients (about 0.2%) file appeals to try to overturn their insurer’s decision.
“This demonstrates the blatant inaccuracy of the nH Predict AI model and the lack of human review involved in the claims denial process,” the lawsuit states.
The complaint seeks damages on a range of claims, including breach of contract, breach of the duty of good faith and fair dealing, and unjust enrichment.