Read time: 4 minutes
Managed care organizations (MCOs), like most organizations, are inundated with information about how to deploy and use artificial intelligence (AI) in ways that reduce business and legal risk. Legal risk arises from federal, state and local laws and regulations. Current laws targeting AI apply narrowly to only certain parts of the insurance life cycle, such as underwriting and pricing activities and medical necessity determinations in utilization management. Because AI use cases go beyond those limited areas and can affect product development, marketing, sales and distribution, policy servicing, fraud detection and other business operations, many state insurance regulators have adopted a version of the National Association of Insurance Commissioners’ (NAIC) “Model Bulletin on the Use of Artificial Intelligence Systems by Insurers” (AI Model Bulletin). Even MCOs that are not licensed in one of the 19 or more states where the insurance regulator has adopted a version of the AI Model Bulletin should consider using the resource as a foundation for their AI governance program.
The AI Model Bulletin sets expectations and guidelines for insurers’ deployment and use of “AI Systems,” which are defined as “machine-based system[s] that can, for a given set of objectives, generate outputs such as predictions, recommendations, content (such as text, images, videos, or sounds), or other output influencing decisions made in real or virtual environments.” This definition is consistent with other government publications related to AI. AI may (but does not need to) involve machine learning, and generative AI is a subset of AI Systems.
The AI Model Bulletin recommends:
- Adopting a tailored, written program for the responsible use of AI Systems (AIS Program). The AIS Program should address: (1) governance; (2) risk management controls; and (3) internal audit functions through the entire insurance life cycle and AI System life cycle. More specifically, the program should address and mitigate risks associated with violations of insurance regulatory standards and laws, such as unfair trade practice laws, that may arise from the MCO’s use of AI Systems.
- Implementing a governance framework. The AIS Program should include a documented framework for the oversight of AI Systems that prioritizes transparency, fairness, accountability and respect for proprietary rights. Governance also includes involving subject matter experts in decision-making, training personnel, monitoring and auditing the operation and performance of AI Systems, and documenting clear roles and responsibilities for those accountable for the AI Systems. The AI Model Bulletin also suggests looking to the governance guidance set forth in the AI Risk Management Framework from the Department of Commerce’s National Institute of Standards and Technology.
- Performing risk management. The AIS Program should include a documented risk assessment and internal controls that mitigate the identified risks. The risk assessment should cover the processes for evaluating and approving AI Systems; data use, quality, protection and retention practices; and management and oversight of predictive AI (i.e., systems that involve “mining of historic data using algorithms and/or machine learning to identify patterns and predict outcomes that can be used to make or support the making of decisions”).
Key takeaways:
- A formal AI governance program can significantly reduce regulatory and business risks arising from the deployment and use of AI technology by MCOs and their suppliers.
- The NAIC’s AI Model Bulletin provides a solid foundation for MCOs’ AI governance programs (and may be required in some states).
- States that have adopted a version of the AI Model Bulletin have made clear their intent that AI governance include participation across the organization, including the Board.