Reed Smith Client Alerts

On April 10, 2019, U.S. lawmakers introduced the Algorithmic Accountability Act (the AAA). The bill is sponsored in the Senate by Senators Ron Wyden (D-OR) and Cory Booker (D-NJ), and Representative Yvette Clarke (D-NY) introduced a companion bill in the House of Representatives. The AAA empowers the Federal Trade Commission (FTC) to promulgate regulations requiring covered entities to conduct impact assessments of algorithmic “automated decision systems” (including machine learning and artificial intelligence) to evaluate their “accuracy, fairness, bias, discrimination, privacy and security.”

Author: Stephanie Wilson

Automated decision systems of covered entities

The AAA defines “covered entities” as those that generate more than $50 million in annual revenue, possess or control the personal information of at least one million consumers or devices, or act as data brokers as a primary business function. The bill empowers the FTC to establish a framework for evaluating the potential bias or discrimination against consumers that might result from covered entities’ use of automated decision systems, broadly defined as any “computational process, including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques, that makes a decision or facilitates human decision making.”

Bias/discrimination impact assessment

Under the bill, covered entities are required to audit their processes for bias and discrimination and to correct identified issues in a timely manner. There have been growing concerns regarding racial, gender-based or political biases that can result from automated decision-making, and regarding the potential negative impacts of the use (or misuse) of artificial intelligence. Senator Wyden highlighted these concerns in the press release announcing the introduction of the bill, noting that “instead of eliminating bias, too often these algorithms depend on biased assumptions or data that can actually reinforce discrimination against women and people of color” in a number of significant decisions that affect consumers, including home ownership, creditworthiness, employment and even incarceration. The bill provides that, in evaluating automated systems for bias, covered entities must review, among other things, the systems’ “training data” to determine whether or how a given system’s biases manifest. Stated another way, the bill rests on the premise that an algorithm is only as good as the data by which it is informed.