Key takeaways
- Recent litigation over the use of AI by MCOs targets automated claims adjudication generally, rather than the actual use of AI in claims determinations.
- Enhanced transparency regarding the role AI plays in the claims adjudication process vis-à-vis claim reviewers is recommended.
- Many of the same best practices regarding implementation of automated claims adjudication processes should be applied regarding the use of AI in the claims review process.
Several recent class action lawsuits have been filed against managed care organizations (MCOs) by members and shareholders in federal district courts across the nation related to the use of algorithms (or, more precisely, artificial intelligence (AI)) as part of the claims review process. While the media and politicians may characterize these cases as AI cases, the cases in fact often lack allegations that are specific to the use of AI. Rather, the allegations focus on decision-making by computer algorithms (whether AI or not) instead of health care professionals. While the lawsuits may ultimately involve the use of AI, that is not a necessary condition for these claims, because the cases challenge the use of computer algorithms as part of the claims process.
Both members and shareholders are raising similar allegations – they want more transparency related to computer-driven coverage claims processing. Shareholders have requested records to gain insights into companies’ AI practices, and have sued when denied access. In addition to challenging the lack of transparency, members often challenge the process used by MCOs, alleging that they are applying improper criteria when adjudicating claims. The following are three common types of allegations included in the complaints:
- MCOs are engaging in illegal practices by substituting computer algorithms for medical professionals to systematically, wrongfully, and automatically deny benefits. Related allegations include using algorithms to override treating physicians’ conclusions and to facilitate coverage decisions en masse without individualized scrutiny based on required criteria. Further, MCOs are alleged to have failed to provide members or doctors with statements listing the bases for denials.
- MCOs are aware of algorithm inaccuracies (e.g., a 90% successful appeals rate following a denial) and are not using reasonable standards to evaluate claims decisions by algorithms. Instead, MCOs allegedly over-rely on algorithms, which has resulted in a significant increase in coverage denials. Plaintiffs describe algorithm outputs as “generic recommendations” that fail to take into account the member’s individual circumstances or statutorily required coverage determination criteria.
- MCOs allegedly employ algorithms in bad faith to cease payments for health care services, and continue to use algorithms to facilitate a “fraudulent scheme” that leads to a purported “clear financial windfall.” Plaintiffs allege that this activity includes directing and incentivizing employees to align their decisions with the algorithm. Plaintiffs also allege that MCOs know that they will not face accountability for wrongful denials, because so few individuals appeal denials. Further, there is allegedly a deliberate failure by MCOs to meet statutory, common law, and contractual obligations for doctor-led coverage determinations.
General considerations for health plans to reduce risks in using AI in the claims review process
While many of these cases appear on their face to raise novel issues regarding the use of new technology in claims processing, many of the same best practices for mitigating litigation risk regarding the claims process generally can be effective here as well. Indeed, many of the processes, procedures, and best practices used in claims processing for coding and claims edits can similarly be applied to the use of AI in the claims review process. The following are some recommendations for MCOs to consider:
- Consider developing and enhancing written policies and procedures to document a computer algorithm’s role in evaluating coverage claims, including monitoring compliance with such policies and procedures and applicable law. Document the standards used by computer algorithms to make determinations, as well as the computer algorithm’s role in the entire claims process, including human involvement in decision-making. It is critical that the role of the computer algorithm expressly accounts for an individual’s circumstances when making medical necessity determinations and that communications accurately reflect the consideration of such circumstances.
- Consider enhancing written policies and procedures that govern the involvement of physicians in claims coverage denials made by computer algorithms, focusing in particular on physician involvement in reviewing patient files as part of denying claims.
- Consider implementing the computer algorithm in a manner that can record its decisions in a detailed, explainable manner, including how it generates outputs. Consider documenting publishable explanations for how the computer algorithms work in a manner that can be produced to regulators and in discovery to demonstrate that decision-making algorithms are accurate and do not cause the MCO to make decisions in bad faith.
- Consider recording and evaluating computer algorithm error rates, including comparing error rates to human-made decisions. Doing so might indicate that the computer algorithm is more accurate than human reviewers.
- Consider how to handle computer algorithm errors through additional fine-tuning of the algorithm/AI model or processing certain types of claims only with human reviewers through a separate process. Given the lack of trust in AI and other computer algorithms, MCOs should consider how they are correcting errors and otherwise updating computer algorithms to improve results and processes.
- Consider using computer algorithms to provide initial claims processing determinations, with a physician reviewer confirming any claim denials prior to finalizing a claim adjudication. Use of physician reviewers to confirm a claim denial reduces the risk of allegations that claims lack individualized review, while still allowing an MCO to utilize the efficiencies of the AI process in helping adjudicate claims.
- Consider transparency in claims-related decision-making and communicating information to members and physicians about claims coverage decisions. Much of the concern from regulators and potential plaintiffs is the lack of information about how computer algorithms make decisions – there is a lack of trust. Additional transparency about how computer algorithms and AI are utilized is an important mechanism for increasing trust.
- Consider storing records that connect the computer algorithm results, the criteria used by the computer algorithm, and the clinical evidence relied upon in making medical necessity determinations. To defend against denials rendered by computer algorithms, it will be important to link the external clinical guidance with the coverage criteria used by the tools and with the explainable decision records described above.
- In the event AI is used, consider enhancing the AI governance team and program. We previously wrote about important aspects of AI governance. Governance should include ongoing auditing and monitoring of the use and quality of AI results, ensuring implicit bias does not adversely affect results, and addressing privacy and data security concerns.
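To make several of the record-keeping recommendations above concrete, the sketch below shows one way an auditable decision record might link an algorithm’s output to the criteria applied, the clinical evidence relied upon, and a confirming physician reviewer, along with a simple appeal-overturn rate metric. This is a minimal illustration only; all field names, the `finalize` rule, and the metric definition are hypothetical assumptions, not a prescribed or production design.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class ClaimDecisionRecord:
    """Hypothetical auditable record tying an algorithm's output to the
    coverage criteria and clinical evidence behind the determination."""
    claim_id: str
    algorithm_version: str
    criteria_applied: list        # coverage criteria the algorithm evaluated
    clinical_evidence: list       # clinical evidence cited for this member
    algorithm_recommendation: str # e.g., "approve" or "deny"
    explanation: str              # plain-language basis for the output
    physician_reviewer: str = ""  # confirming physician (required for denials)
    final_determination: str = ""
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def finalize(record: ClaimDecisionRecord, physician: str, determination: str) -> dict:
    """Enforce the 'physician confirms every denial' rule before finalizing,
    then return a flat dict suitable for archiving and later production."""
    if determination == "deny" and not physician:
        raise ValueError("denials require a named physician reviewer")
    record.physician_reviewer = physician
    record.final_determination = determination
    return asdict(record)

def overturn_rate(determinations: list, overturned_on_appeal: list) -> float:
    """Share of denials later overturned on appeal -- one possible proxy
    for algorithm accuracy, comparable against human-reviewer baselines."""
    denials = [d for d in determinations if d == "deny"]
    return len(overturned_on_appeal) / len(denials) if denials else 0.0
```

A record produced this way keeps the algorithm’s recommendation, its stated basis, and the human sign-off in one place, which is the kind of linkage the recommendations above suggest may help in responding to regulators or discovery requests.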
Client Alert 2023-263