HHS’s Plan for Promoting Responsible Use of Artificial Intelligence in Automated and Algorithmic Systems by State, Local, Tribal, and Territorial Governments in the Administration of Public Benefits (AI Plan for State and Local Governments) reflects the agency’s current thinking on managing risk from AI use. Although the plan does not directly apply to private organizations, they should still review the AI Plan for State and Local Governments as a source of guidance, as it offers insight into the government’s expectations and priorities for the responsible implementation of AI.
Organizations that deferred creating or fully implementing an AI governance program until HHS issued clear requirements or expectations may now have what they need. Executive Order 14110 required HHS to publish a plan for the use of automated or algorithmic systems in government agencies’ implementation of public benefits and services; the AI Plan for State and Local Governments is the result.
The plan explains how HHS allocates various AI use cases into risk categories and recommends steps that can be taken to mitigate the potential harm from their use. HHS expressly sought to align its recommendations with existing AI frameworks, such as the NIST AI Risk Management Framework and the White House’s Blueprint for an AI Bill of Rights. HHS reminded state and local governments that using AI technologies does “not change responsibility to comply with existing applicable federal, state, and local laws, and regulations, including those addressing privacy, confidentiality, intellectual property, cybersecurity, human and civil rights, and civil liberties.” Although the AI Plan for State and Local Governments has a limited scope of applicability, it provides valuable insights for private organizations that use or develop AI technologies.
A. Risk assessment
HHS is not focused on prohibiting the use of AI. Rather, it focuses on managing risk in proportion to the potential harm presented by a given use of AI.
The AI Plan for State and Local Governments creates three risk tiers for potential AI use:
- The AI technology is presumed to impact rights or safety (i.e., it directly controls or influences outcomes or human decision-making), which includes conducting workplace surveillance, health screening, and benefits administration.
- The AI technology may impact rights or safety, which includes public-facing AI-powered chatbots or phone support and internal-facing AI-powered chatbots used to answer customer questions.
- The AI technology is unlikely to impact rights or safety, which includes Interactive Voice Response technology for call centers that uses voice recognition to assist callers in navigating menus and routing calls; transcribing documents from unstructured form to structured database fields; natural language processing; and summarizing case management files.
B. AI plan recommendations
After evaluating the risk, HHS encourages adherence to certain best practices to manage it, including:
- Focus on the following for effective AI adoption:
- IT infrastructure;
- high-quality data for use in training, testing, and operating automated or algorithmic systems;
- appropriate safeguards (e.g., cybersecurity, privacy, confidentiality, and civil rights);
- technical talent and staff training;
- sharing and collaboration in areas such as models and code, data sets, policies, best practices and lessons learned, technical resources, procurement, and implementation resources; and
- quality assurance policies and processes to build and measure trust in adopted automated and algorithmic systems.
- Rely on the NIST AI Risk Management Framework to implement the essential building blocks of AI responsibility and trustworthiness.
- Use a safety-by-design approach throughout the AI technology life cycle to prevent safety issues.
- Conduct impact assessments to weigh the benefits against the risks from the use of AI.
- Pilot AI systems in a real-world context before full deployment.
- Implement strong monitoring, evaluation, and safeguards.
- Ensure human facilitation and intervention where needed in the event that AI fails or causes harm.
- Consult workers and provide adequate training for all staff around developing, using, enhancing, and maintaining AI.
- Identify and assess impacts of AI technology on equity and fairness and mitigate algorithmic discrimination when it is present.
- Provide options to opt out of the use of public-facing AI in favor of a human alternative, wherever practicable.
- Establish clear, effective human oversight protocols, document those processes, train staff, and conduct periodic oversight to ensure adherence.
- Establish governance for a program-wide risk management framework covering use of AI technologies, with defined roles and responsibilities for clear understanding and communication of AI uses, assumptions, and limitations.
- Only acquire and use AI if a vendor provides all necessary information to evaluate suitability and risk.
There is no shortage of guidance on responsible AI implementation that mitigates foreseeable risks, particularly the risk of harm to individuals. HHS did not break significant new ground with the AI Plan for State and Local Governments, but it reinforced the importance of implementing AI technology only after making significant efforts to manage the associated risks. Even though the fundamental elements of AI risk mitigation are fairly standardized, formalizing an AI governance program that operationalizes those risk mitigation steps is more challenging and takes time. Organizations that take those steps now will mitigate risk and be in a much better position to comply with future laws and regulations.
Client Alert 2024-203