Authors: Wendell J. Bartnick and Vanessa A. Perumal
This alert explains how AI differs from other computer processing and why those differences have lawmakers and others so concerned about its use. It then provides a brief overview of existing regulatory proposals and the thinking that will likely underpin future regulation of AI and certain of its uses. Finally, the alert provides a checklist of potential risk mitigation considerations for organizations to incorporate into their AI governance programs to help meet current legal obligations and prepare for future regulation.
AI is dynamic and data-driven, not static and human-driven
Businesses have been using computers, computer algorithms and automated data processing for generations. So, why is AI different?
AI is different in two material ways that potentially create risk. First, AI models (algorithms) are sets of rules derived dynamically from data, not rules hard-coded by humans. For example, when developing AI using machine learning – currently the predominant method for creating AI models – humans create the empty shell of the AI model and then ‘train’ it by feeding it large data sets. Commonly, as part of the training, subject matter experts (SMEs) correct the AI model when its output is incorrect. The quality of the AI model is therefore highly dependent on the rules the model derives from the training data and on the SMEs used to train it. If the training data suffers from quality issues, the AI model's outputs are likely to exhibit quality issues. Further, because the data creates the rules in the AI model, the developers (and users) will not always be able to identify or explain those rules.
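As a purely illustrative sketch (not drawn from any particular product), the following Python example uses the open-source scikit-learn library with invented loan-approval data to show the point: no human writes the approval rule; the model derives it from the training examples, so flawed training labels flow straight through to flawed outputs.

```python
# Hypothetical sketch: the model's "rules" are derived from the training data,
# not hard-coded by a programmer. Data, features and labels are invented.
from sklearn.tree import DecisionTreeClassifier

# Training data: [years_of_experience, credit_score] -> loan approved (1) or not (0).
# No one writes an "approve if credit_score > X" rule; the model infers one.
X_train = [[1, 580], [2, 600], [5, 700], [7, 720], [10, 760], [12, 790]]
y_train = [0, 0, 1, 1, 1, 1]

model = DecisionTreeClassifier(random_state=0)
model.fit(X_train, y_train)              # the "rules" are learned here, from data

print(model.predict([[6, 710]]))         # the output depends entirely on what was learned

# If the labels are flawed (e.g., historically biased or mis-recorded),
# the learned rules -- and therefore the outputs -- inherit those flaws.
y_bad = [0, 0, 0, 0, 1, 1]               # same inputs, lower-quality labels
biased_model = DecisionTreeClassifier(random_state=0).fit(X_train, y_bad)
print(biased_model.predict([[6, 710]]))  # may now reject the same applicant
```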
Second, after an AI model is put into use, it can dynamically update itself based on that real-world use. Because there is little control over real-world use, the data-created rules that make up the AI model can increase or decrease in accuracy; typically, there is no smooth, linear progression toward higher quality. It is also possible for the same inputs to produce different outputs from one use of the AI model to the next. There have been news reports describing how popular AI models have improved in some areas and degraded in others over time. The operation of AI-based computer programs is therefore potentially unpredictable and unexplainable, even by the AI model developers.
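Similarly, the hypothetical sketch below (again using scikit-learn, with an invented data stream) illustrates how a deployed model that keeps learning from real-world inputs can drift: each incremental update changes the model's internal rules, so its accuracy after an update depends entirely on the quality of the new data it saw.

```python
# Hypothetical sketch of a deployed model updating itself from real-world data.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Initial training on curated, labeled data.
X_train = rng.normal(size=(200, 3))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
model = SGDClassifier(random_state=0)
model.partial_fit(X_train, y_train, classes=[0, 1])

# In production, the model keeps updating on incoming data.
# If the incoming labels are noisy or skewed, accuracy can silently degrade.
X_live = rng.normal(size=(50, 3))
y_live_noisy = rng.integers(0, 2, size=50)   # e.g., poor-quality feedback labels
model.partial_fit(X_live, y_live_noisy)      # the model's rules have now shifted

# Measuring performance after the update is the only way to know which way it moved.
X_test = rng.normal(size=(500, 3))
y_test = (X_test[:, 0] + X_test[:, 1] > 0).astype(int)
print("accuracy after live update:", model.score(X_test, y_test))
```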
For the purposes of AI governance, it is important to recognize that the use of AI models potentially creates risk because (1) their quality is primarily data-driven, not programmer-driven, and (2) they can dynamically update over time both positively and negatively. Given the perceived lack of control over how AI operates and the use cases to which AI has already been applied, many are concerned with regulating it.
The surge in AI-focused legislative and regulatory activity
Due to the unique characteristics of AI and perceived harms from it, there is a growing consensus around the world "to do something."
In the United States, politicians at the federal and state levels have strongly voiced concerns. In recent months, there have been a series of congressional hearings, proposed guidelines and bills, enacted state consumer privacy laws with some potential application to AI, and the creation of committees at the federal and state levels to study the use of AI and how best to regulate it. The current administration presented the White House Blueprint for an AI Bill of Rights, aimed at establishing ethical guidelines, privacy protections and equitable access to AI technology. Further, there has been much thinking and progress toward principles that underlie the responsible and ethical use of AI, which are sure to inform future legislation and regulation. For example, the National Institute of Standards and Technology (NIST) published its Artificial Intelligence Risk Management Framework (AI RMF 1.0) in January.
Despite the political handwringing, regulators have made clear that AI is already regulated. Federal agencies such as the Federal Trade Commission, Consumer Financial Protection Bureau and Equal Employment Opportunity Commission have stated that they are well aware of the risks posed by AI and are prepared to enforce existing law in the context of AI use. The Food and Drug Administration has released a discussion paper and action plan intended to help ensure the safety and effectiveness of software as a medical device (SaMD) technologies that use AI. States have continued enacting consumer privacy laws that restrict covered businesses’ use of technology to make decisions, without human oversight, that have a legal or other similarly significant effect on individuals (e.g., employment, housing, health and issuing credit).
The United States is not alone (and may not even be leading) in its efforts to regulate AI. The EU is moving forward with a draft AI Act discussed in a prior Reed Smith post. The United Kingdom has issued a policy paper, AI regulation: a pro-innovation approach. China has issued some regulations, including provisional rules that govern certain generative AI services.
Despite the many stakeholders and viewpoints around the world, there is significant agreement on the most important areas of focus, which are described below. Future legislation and regulation will likely address some or all of these areas. Businesses will benefit from taking a multi-disciplinary approach to developing boundaries, policies and procedures that holistically govern their development and/or use of AI.
Considerations for AI governance teams
Even though regulators – and the laws and regulations they enforce – are still catching up to technological change, businesses already have comprehensive guidance available. Businesses that incorporate the following considerations into their AI governance programs will significantly mitigate the risk of developing and/or using AI and will likely be well prepared for future regulation.
- Has the business considered how it can develop an AI governance program based on a risk management framework? NIST, other government bodies and private organizations are publishing best practices for assessing risk from AI development and use. Many legislative proposals adopt a risk-based approach under which businesses that use ‘high-risk’ AI must comply with additional obligations.
- Is the product or service using AI robust and resilient? How will the product or service using the AI maintain its level of performance under unintended but foreseeable circumstances, and will performance degrade in proportion to how unexpected the use is?
- How has the business taken steps to ensure the use of high-quality training and validation data? The quality of the data used to train an AI model directly correlates with the quality of the model. Therefore, businesses should consider whether that data was representative, relevant, accurate and complete. Businesses should also consider data provenance to confirm that the data used to train and validate an AI model does not violate any third-party rights.
- If an AI model is trained on personal data, has the business considered applicable privacy laws and rights? If a business relies on consent to use personal data, it should consider the impact of individuals withdrawing that consent.
- Using AI products or services could introduce new security vulnerabilities for a business. What security measures are in place to prevent bad actors from, for example, using the AI to introduce malware or poisoning the AI model with bad data that degrades its quality?
- When AI will be used in decision-making about individuals, particularly for material decisions, how has the business evaluated and removed unwanted bias from the data before it is used to train the AI model, so that the model does not perpetuate that bias?
- How has the business taken steps to understand the acceptable use boundaries for the product or service using AI and to communicate those boundaries to its staff? Relatedly, a business’ AI governance team can help the business adopt AI-driven products and services for the business’ intended use cases.
- Has the business taken steps to evaluate AI results to help ensure they are valid and accurate? A business may not be successful in arguing that it is not responsible for inaccurate results due to the use of a new or complicated technology.
- How will users or other individuals contest results generated from AI?
- Does the business have the ability, either directly or via the vendor of the AI technology, to promptly correct problems with the AI?
- What ongoing monitoring and quality control measures will be put in place by the business? Many AI projects have stumbled due to a failure to allocate and prepare the resources necessary to keep the AI working properly.
- There are a host of transparency considerations related to the use of AI. Are users of AI-driven software (e.g., chatbots) aware that they are interacting with AI? Is the AI transparent about the confidence it has in its output so that users can determine whether the output is reliable?
- Is the AI explainable? Is the business able to understand how the AI generated its outputs? What criteria were used?
- Does the business have the ability to interpret the AI’s operations so that it can understand why a particular output was generated in a particular situation? This capability may be critical for legal compliance because, without it, a business may not be able to explain why certain AI-based decisions were made.
Conclusion
AI-based products and services have shown a lot of promise. However, they can bring significant risk unless a business has an appropriate AI governance program and team in place with the authority and resources to mitigate that risk. The AI governance team should consist of members who can cover the many relevant disciplines and who ensure that the team considers the perspectives of the various stakeholders (including consumers or employees affected by the use of AI). The AI governance team should also consider the many novel questions presented by the use of AI so that the business can be intentional about meeting its business objectives at an acceptable level of risk.
In-depth 2023-178