AI and consumer protection
The first day of the hearings focused on consumer protection issues – specifically, whether there can be a generally accepted definition of “harm,” given that different people have different values. The speakers agreed that violations of individual privacy and societal norms of justice (for example, when machine-learning tools deliberately or inadvertently target protected classes or disadvantaged populations) cause first-line harm that must be remedied when detected. A more complex inquiry is whether bias in equation design, reporting and confirmation – the latter of which may create a “filter bubble” where algorithms display content depending on a user’s location, search history and other variables – facilitates strategic marketing or raises an ethical red flag. Another dilemma was whether price discrimination based on where consumers live and what technology they use (laptop versus mobile phone) to make purchases is fair, even if it’s legal.
AI and antitrust
On the second day, Bruce Hoffman, Director of the FTC’s Bureau of Competition, stressed that the threat of “algorithm collusion” poses a challenge to regulators. The very purpose of algorithms is to direct outcomes by processing huge amounts of data. As algorithms acquire more information about the market, competitors that use machine learning to set prices and production levels may arrive at the same decisions through the “tacit collusion” of their systems. Absent proof of an anticompetitive agreement – conduct that, as we have previously reported,1 antitrust enforcers have already begun to challenge – Hoffman questioned when AI should trigger antitrust liability and who should be held accountable for alleged misconduct.
Panelists subsequently discussed whether AI raises barriers to entry or presents new opportunities. Most concluded that a start-up with better AI tools could transform a market before incumbents had a chance to respond. Rather than stymieing competition, then, AI is likely to create new markets.
The role of industry
The panelists concluded that consumers will never be completely insulated from harm, however it is defined, because of AI’s inherent opacity. This lack of transparency enables companies with access to large swaths of (often personal) data to wield tremendous power over how data is used and online content is presented. Participants agreed that AI companies can and should address the “power asymmetry,” with one panelist advocating that if 2018 was “the year of action” for consumer safeguards, 2019 should be the year of “agency and accountability.” Panelists discussed several proposals for addressing this “power asymmetry,” and while the FTC has taken no concrete action in response, these proposals may well signal the next step in enforcement or industry self-regulation in this rapidly developing area.
First, there was a call to proactively adopt industry-wide standards in security, audit and technical controls. Under these standards, businesses would disclose the precautions they take to keep data private and to ensure that algorithms and machine learning incorporate fairness metrics. Companies would also be required to document the measures they take and produce this evidence to agencies during an investigation.
Second, the expert panelists proposed remedying the “power asymmetry” through education. Acknowledging that AI becomes less mysterious once people understand the principles of machine learning, panelists reasoned that informing consumers exactly how a company uses their data would mitigate consumer concern. They proposed that this education could take the form of a notice to consumers explaining how the collected data is used and whether they can control how it is disseminated or opt out.
Third, industry experts were less certain about how to address tacit algorithm collusion that could arise due to the black box nature of AI and the lack of bright-line rules guiding the use of AI. Some panelists suggested that companies should ban problematic features or that programmers should be required to implement code that mitigates the risk of collusion. Others wondered whether these measures are feasible given how heavily machine learning relies on external stimuli.
The current legal framework
Most panelists agreed that the broad jurisdiction conferred by section 5 of the FTC Act, which allows the agency to investigate “unfair or deceptive acts or practices in or affecting commerce,” is sufficient to reach consumer and market harms caused by AI and algorithms. The drawback of section 5 is that it offers little guidance on what conduct will trigger agency scrutiny – a particularly fraught issue for cutting-edge industries such as AI. The contrarians of the group argued that if AI is as transformative as some suggest, new institutions and statutes are needed to grapple with the regulatory challenges. Both camps urged the FTC to provide guidance, through policy statements, on what constitutes an actionable practice – that is, a practice that harms either consumers or competition in the relevant market.
Reaching consensus on how to regulate AI effectively is almost as difficult as the technology is impenetrable. However, the FTC hearings offered useful insights into what measures companies and agencies may undertake to protect consumers and competition:
- The next frontier of antitrust enforcement may focus on algorithm collusion. The agencies have the daunting task of regulating an industry that their own staff may not understand. Establishing legal parameters to distinguish between lawful conscious parallelism and suspect conduct that falls short of a hard-core violation will be difficult.
- Businesses with AI expertise understand the power asymmetry that exists online and the potential for consumer harm. Rather than waiting for government intervention, companies can take proactive steps that empower consumers, such as education initiatives and opt-out provisions. However, industry self-policing will, at most, delay more intensive government oversight of AI practices.
- The broad jurisdiction of section 5 of the FTC Act will capture most if not all of the antitrust and consumer protection issues that may arise from AI, even as the technology continues to evolve. However, the industry would benefit from clear guidance on the practices the FTC may challenge.
How Reed Smith can help
Reed Smith’s Antitrust & Competition Team can assist you with AI, consumer protection and antitrust issues. Please feel free to contact one of the authors listed above.
1 See Michelle A. Mantine & Karl Herrmann, Agreements and Algorithms Can Add up to Antitrust Liability.
Client Alert 2018-245