Organizations’ use of technology with generative AI (GenAI) components has surged over the past year, jumping to 72% adoption according to one study. However, many organizations are still in the early stages of developing a robust AI governance program, suggesting they may not yet be equipped to fully manage the risks associated with GenAI. Until U.S. AI laws and regulations mature, organizations can look to guidance from the National Institute of Standards and Technology (NIST), part of the Department of Commerce, which previously published an AI Risk Management Framework (AI RMF).
This post highlights key takeaways and considerations from NIST’s supplemental publication, AI Risk Management Framework GenAI Profile (NIST AI 600-1), which focuses specifically on managing risks associated with the deployment and use of GenAI. NIST AI 600-1 adopts the definition of GenAI from the White House’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence: “the class of AI models that emulate the structure and characteristics of input data in order to generate derived synthetic content. This can include images, videos, audio, text, and other digital content.”
GenAI risks
According to NIST, challenges with risk management are aggravated by a lack of visibility into GenAI training data and the generally immature state of the science of AI measurement and safety. NIST AI 600-1 identifies 12 high-level potential risks posed by GenAI, applicable across all industries:
(i) access to chemical, biological, radiological, or nuclear (CBRN) weapons or other dangerous materials or agents, including related information or capabilities;
(ii) confabulations (confidently stated false content, also known as hallucinations or fabrications) that can mislead or deceive users;
(iii) dangerous, violent, or hateful content;
(iv) data privacy impacts due to leakage and unauthorized use, disclosure, or de-anonymization of biometric or other personally identifiable information or sensitive data;
(v) environmental impacts arising from high usage of computing resources;
(vi) harmful bias in outputs that results in discrimination;
(vii) human-AI configurations that can result in automation bias, over-reliance, or emotional entanglement with GenAI;
(viii) information integrity issues, such as difficulty distinguishing fact from fiction;
(ix) information security concerns, such as lowered barriers to offensive cyber capabilities;
(x) intellectual property issues, such as the unauthorized production of allegedly copyrighted, trademarked, or licensed content;
(xi) easier production of, and access to, obscene, degrading, and/or abusive content; and
(xii) value chain and component integration challenges, such as difficulty in vetting suppliers.
Risk mitigation options for GenAI
NIST proposes several actions that can be taken to address the GenAI risks, and organizes its guidance into four categories: govern, map, measure, and manage. Examples of these proposed actions are listed below.
1. The “govern” category focuses on aligning AI risk management with organizational principles and strategic priorities. Four sample actions NIST recommends are:
a. Align GenAI development and use with applicable laws, regulations, organizational policies, processes, and procedures.
b. Establish and maintain organizational roles, policies, and procedures for communicating and responding to GenAI incidents.
c. Enable routine GenAI testing, identification of incidents, and information sharing.
d. Ensure policies and procedures are in place that address AI risks associated with third parties, including risks of infringement of a third party’s intellectual property or other rights.
2. The “map” category focuses on establishing the context for identifying and framing risks. Examples of recommended actions include:
a. Document intended uses and purposes, beneficial uses, applicable laws, expectations, and the context in which the GenAI will be deployed, including the types of users and the impact on stakeholders.
b. Define specific tasks and methods to implement the functions that the GenAI will support (e.g., classifiers, recommenders, generative models).
c. Assess and document processes in line with technical standards and certifications.
3. The “measure” category focuses on developing processes to evaluate and improve the AI system’s performance. NIST recommendations include:
a. Validate, explain, and document inputs and outputs.
b. Evaluate the GenAI technology for fairness and bias.
c. Measure and examine risks associated with transparency and accountability.
4. The “manage” category focuses on prioritizing and addressing AI risks based on their potential impacts. Some recommendations are:
a. Monitor and measure GenAI technology, including activities for continual improvement.
b. Document risks and benefits from third-party resources.
c. Establish procedures for responding to GenAI incidents in high-risk scenarios, and ensure those responses are developed, planned, and documented.
While NIST’s guidance is not binding on organizations, the NIST AI RMF has been recognized by at least one U.S. state AI law as a resource organizations may consult to demonstrate compliance. Organizations using GenAI should consider incorporating NIST AI 600-1 (along with the NIST AI RMF) into their AI governance programs, as it can speed risk analysis and the identification of risk gaps. The guidance will also likely influence future legislative and regulatory activity, and aligning with it now may prevent a rush to comply when additional AI laws and regulations are enacted.
Client Alert 2024-200