Automated decision systems of covered entities
The AAA defines “covered entities” as those that generate more than $50 million per year in revenue, possess or control the personal information of at least one million consumers or devices, or act as data brokers as a primary business function. The bill empowers the FTC to establish a framework for evaluating the potential bias or discrimination against consumers that might result from covered entities’ use of automated systems, broadly defined as “computational process[es], including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques, that makes a decision or facilitates human decision making.”
Bias/discrimination impact assessment
Under the bill, covered entities are required to audit their processes for bias and discrimination and to timely correct identified issues. There have been growing concerns regarding racial, gender-based or political biases that can result from automated decision making and the potential negative impacts from the use (or misuse) of artificial intelligence. Senator Wyden highlighted these concerns in the press release announcing the introduction of the bill, noting that “instead of eliminating bias, too often these algorithms depend on biased assumptions or data that can actually reinforce discrimination against women and people of color” across a number of significant decisions that impact consumers, including home ownership, creditworthiness, employment and even incarceration. In evaluating automated systems for bias, the bill requires covered entities to review, among other things, a system’s “training data” to determine whether or how a given system’s biases manifest. Stated another way, the bill rests on the argument that an algorithm is only as good as the data by which it is informed.
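The bill does not prescribe any particular audit methodology, but the kind of training-data review it contemplates can be illustrated with a minimal, hypothetical sketch. The example below applies one common screening heuristic, the “four-fifths rule” drawn from employment-discrimination guidance, to made-up loan-approval records; the group labels, data and function names are illustrative assumptions, not anything defined in the AAA.

```python
from collections import Counter

def selection_rates(records):
    """Compute the rate of favorable outcomes per group.

    records: list of (group, outcome) pairs, where outcome is
    1 (favorable, e.g. loan approved) or 0 (unfavorable).
    """
    totals = Counter()
    favorable = Counter()
    for group, outcome in records:
        totals[group] += 1
        favorable[group] += outcome
    return {g: favorable[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate.

    The four-fifths rule flags ratios below 0.8 for closer review;
    it is a screening heuristic, not a legal conclusion.
    """
    lo, hi = min(rates.values()), max(rates.values())
    return lo / hi

# Hypothetical training data: (applicant group, approved?)
data = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
        ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

rates = selection_rates(data)          # group A: 0.75, group B: 0.25
ratio = disparate_impact_ratio(rates)  # 1/3, well below the 0.8 threshold
```

A real impact assessment would of course go far beyond a single summary ratio, but even this crude check shows how a disparity baked into historical training data can be surfaced before a model trained on it reinforces the pattern.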