Reed Smith Client Alerts

On April 19, 2021, the Federal Trade Commission (FTC or the Commission) published a blog post under “Tips & Advice,” titled Aiming for truth, fairness, and equity in your company’s use of AI. In this post, the FTC describes some ways artificial intelligence (AI) is making a difference in many applications and industries. But the FTC also highlights certain dangers of AI, like unintended (or worse, intentional) discrimination based on race and other protected classes. The FTC provides guidance on how businesses can minimize these risks, while reminding them of the Commission’s broad enforcement powers under section 5 of the FTC Act. Simultaneously, various other financial regulatory agencies are requesting information and have opened a comment period for organizations to share their current uses of AI. The FTC and other regulators, backed by actions of the Biden administration, are making it clear that equity is a requirement when it comes to AI applications.

Authors: Catherine R. Castaldo, Edward Fultz

What is the problem?

Because AI and its underlying algorithms can – even inadvertently – produce unfair, deceptive, biased, or erroneous outcomes, various federal agencies have taken different approaches to ensure that organizations are aware of these risks and are taking the steps necessary to limit or prevent such outcomes. Financial regulators, including the Federal Reserve Board, the Consumer Financial Protection Bureau, the Federal Deposit Insurance Corporation, the National Credit Union Administration, and the Office of the Comptroller of the Currency, have issued a Request for Information, giving organizations that benefit from the use of AI the opportunity to help shape policy and regulation by submitting comments. The FTC is taking a more direct approach. While its blog post touts self-accountability, it also invokes the Commission’s section 5 authority and highlights its willingness to use that authority if the Commission deems it warranted.

This problem is also a component of the Biden administration’s efforts to eliminate systemic racism. During his first week in office, President Biden signed executive orders intended to increase racial equity in a variety of sectors – freeing federal agencies to reinstate rules that strengthen antidiscrimination policies in lending and housing. This includes the use of AI by banks and the housing industry to predict creditworthiness.

Who is responsible?

AI is now widely used by businesses in many applications. These varied uses across disparate industries raise the question of who might be held responsible for the failings of this new and ever-evolving technology. Does responsibility lie in the supply chain, with the programmers and software engineers? Or should it lie more squarely with the end user, who contracts for, purchases, and deploys the AI software in its processes? When it comes to AI software issues, regulators have previously targeted the end user, on the theory that the end user is ultimately responsible for the tool. Future enforcement actions could now also target the supply chain. While it may once have been easier to attribute misuse of software tools to an end user, recent research has revealed that the disparate impact of seemingly race-neutral AI is the result of more fundamental flaws.