In addition to these more general laws, which apply to AI but are not tailored to it, the EU is building on laws shaped specifically for AI that are currently making their way through the European legislative process, particularly the AI Act and the upcoming AI Liability Directive. It is fair to say that the EU approach to AI risk management is characterized by a comprehensive range of legislation tailored to specific digital environments.
From a territorial perspective and as a general rule, these EU laws apply when the organization operating the AI system is based in the EU/EEA, or when the users of the AI system or the subjects whose data is processed by it are based in the EU/EEA. From a copyright perspective, EU copyright laws also apply if and to the extent that protection of the copyright-protected work is sought in the EU/EEA.
GDPR and AI
The GDPR will apply to AI if the business operating the AI system is based in the EU/EEA or if the users of the AI system are located in the EU (art. 3 GDPR).
In addition to the data protection implications (see Data protection and privacy section), the GDPR contains two important provisions related to algorithmic decision-making. First, the GDPR states that algorithmic systems should not make significant decisions affecting legal rights without human oversight (art. 22 GDPR). Second, the GDPR guarantees an individual’s right to “meaningful information about the logic” of algorithmic systems (arts. 13–15 GDPR). As in many areas, the GDPR is not very clear on this point, leaving many questions unanswered. How does the GDPR affect machine learning in the enterprise? In particular, how often may data subjects request this information, how valuable is the information to them, and what happens when companies refuse to provide it? As a result, the idea that the GDPR mandates a “right to explanation” of machine learning models has become a controversial subject.
EU AI Act
In April 2021, the EU Commission published a proposal for an EU Artificial Intelligence Act in the form of an AI Regulation, which would be directly applicable throughout the EU. The AI Regulation seeks to harmonize rules on artificial intelligence by ensuring that AI products are sufficiently safe and robust before they enter the EU market. It applies whenever the operation of an AI system, its use or the use of its output has a connection to the EU/EEA (art. 2 AI Act), specifically to:
- Providers that first place an AI system on the market commercially or put it into service in the EU, regardless of whether the providers are located inside or outside the EU
- Users of AI located within the EU
- Providers and users located outside the EU, if the output produced by the system is used within the EU
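Read together, these three prongs amount to a simple disjunction. As a minimal sketch, the territorial scope can be expressed in Python; the names `AISystemContext` and `ai_act_applies` are our own shorthand for illustration, not terms from the Act:

```python
from dataclasses import dataclass

@dataclass
class AISystemContext:
    """Hypothetical facts needed for the art. 2 applicability check."""
    provider_places_on_eu_market: bool  # supplied commercially or put into service in the EU
    user_located_in_eu: bool            # user of the AI system is within the EU
    output_used_in_eu: bool             # system output is used within the EU

def ai_act_applies(ctx: AISystemContext) -> bool:
    """Schematic reading of the territorial scope described above:
    any one of the three connections to the EU/EEA suffices."""
    return (
        ctx.provider_places_on_eu_market
        or ctx.user_located_in_eu
        or ctx.output_used_in_eu
    )

# Example: a non-EU provider whose system's output is used in the EU is covered.
print(ai_act_applies(AISystemContext(False, False, True)))  # True
```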
The EU AI Act will be a particularly important component in many areas of EU AI risk management. Although the AI Act is not yet final, its main features can be analyzed from the European Commission’s April 2021 proposal, the EU Council’s December 2022 final proposal, the Parliament’s June 2023 final proposal, and available information on the political agreement reached between the Council and Parliament on December 9, 2023.
The AI Act has been presented as a “horizontal” piece of legislation by the EU Commission. The EU AI Act indeed sets out horizontal rules for the development, commercialization and use of AI-driven products, services and systems within the territory of the EU.
However, there are also several limitations and exemptions in the AI Act. In practice, it implements a tiered system of regulatory obligations for a specifically enumerated list of AI applications. Providers of AI applications associated with limited risks, such as deepfakes, chatbots and biometric analytics, must make clear disclosures to affected individuals. Another group of AI systems with “unacceptable risks” would be banned outright. A third group of AI systems is considered high risk and may only be operated or used under certain restrictions, including logging, documentation, IT security and the possibility for human intervention.
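To make the tiered structure concrete, the sketch below models it as a mapping from risk class to obligation set. The names (`RiskTier`, `OBLIGATIONS`, `obligations_for`) are hypothetical, and the duties are condensed from the description above rather than from the statutory text:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # restricted: conformity assessment plus controls
    LIMITED = "limited"            # transparency duties (e.g., chatbots, deepfakes)
    MINIMAL = "minimal"            # no AI Act-specific obligations

# Condensed obligation mapping, paraphrasing the tiers described above.
OBLIGATIONS: dict[RiskTier, list[str]] = {
    RiskTier.UNACCEPTABLE: ["prohibited from the EU market"],
    RiskTier.HIGH: [
        "conformity assessment before market entry",
        "risk and quality management systems",
        "logging and documentation",
        "IT security",
        "possibility for human intervention",
    ],
    RiskTier.LIMITED: ["clear disclosure to affected individuals"],
    RiskTier.MINIMAL: [],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the sketched compliance duties for a given risk tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.LIMITED))  # ['clear disclosure to affected individuals']
```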
The following AI systems are considered intrusive and discriminatory and are banned:
- “Real-time” remote biometric identification systems in publicly accessible spaces
- “Post” remote biometric identification systems, with the only exception of use by law enforcement for the prosecution of serious crimes and only after judicial authorization
- Biometric categorization systems using sensitive characteristics (e.g., gender, race, ethnicity, citizenship status, religion, political orientation)
- Predictive policing systems (based on profiling, location or past criminal behavior)
- Emotion recognition systems in law enforcement, border management, workplace and educational institutions
- Indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases (violating human rights and the right to privacy)
High-risk AI systems, which form the most comprehensive and impactful of the AI Act’s classifications, are subject to a conformity assessment before they may be placed on the market. Providers of such high-risk AI systems are required to take extra steps, for example, to implement risk and quality management systems or to document the system’s output.
AI systems are considered high risk if they have an adverse impact on people’s safety or their fundamental rights. This category includes areas that may harm people’s health, safety, fundamental rights or the environment, as well as AI systems used to influence voters in political campaigns and recommender systems used by social media platforms with more than 45 million users (designated as very large online platforms under the Digital Services Act).
Two different categories of AI applications are classified as high risk in the AI Act:
- AI systems that are intended to be used as safety components of a product, or that are themselves products, covered by the legislation listed in Annex II to the AI Act. This category of high-risk AI systems includes consumer products that are already regulated under the regulatory regime of the EU single market, for example, medical devices, vehicles or toys. In general, this means that AI-enabled consumer products will still go through the existing regulatory process under the relevant product harmonization legislation and will not require a second, independent conformity assessment just for the requirements of the AI Act.
- AI systems included in an enumerated list of applications that involve significant socially relevant decisions. The list includes, for example, real-time and post-remote biometric identification systems, systems used for hiring or educational access, and credit scoring systems.
Unlike consumer products, the latter AI systems are generally considered to pose new risks and have so far been largely unregulated. This means that the EU will need to develop specific AI standards for all of these different use cases. This is expected to be a significant implementation challenge, given the number of high-risk AI applications and the novelty of AI standards.
The AI Act provides for substantial fines in the event of noncompliance, as well as other remedies; in the most serious cases, fines can scale up to the higher of €35 million or 7% of total worldwide annual turnover.
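Because the cap is defined as the higher of a fixed amount and a turnover percentage, the maximum exposure grows with company size. A minimal sketch of that arithmetic (the function name `max_fine_eur` is ours, for illustration only):

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Maximum fine for the most serious infringements: the higher of
    EUR 35 million or 7% of total worldwide annual turnover."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# For a company with EUR 2 billion in turnover, 7% (EUR 140 million)
# exceeds the EUR 35 million floor, so the cap is EUR 140 million.
print(f"{max_fine_eur(2_000_000_000):,.0f}")  # 140,000,000
```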
The AI Act is still expected to enter into force in 2024, but its obligations will phase in over a staggered transition period, giving organizations up to three years to get ready before the AI Act applies in full.
Digital Services Act and Digital Markets Act
The EU’s AI Act is not the only significant law regulating AI risks. The EU has already passed the Digital Services Act (DSA) and the Digital Markets Act (DMA). Both have a similar extraterritorial scope to the GDPR and, therefore, may also apply to organizations based outside the EU (see art. 2 DSA and art. 1 DMA) if their users are based in the EU/EEA.
The DSA, passed in November 2022, treats AI as part of its holistic approach to online platforms and search engines. By creating new transparency requirements, requiring independent audits and enabling independent research on large platforms, the DSA will force organizations to reveal much new information about the function and harms of AI on these platforms. Further, the DSA requires large platforms to explain the AI behind content recommendations, such as populating news feeds, and to offer users an alternative recommender system not based on sensitive user data. To the extent that these recommender systems contribute to the spread of disinformation, and large platforms fail to mitigate that harm, they may face fines under the DSA.
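As a rough illustration of that “alternative recommender” requirement, the sketch below contrasts a profiling-based ranking with a non-personalized fallback such as recency. The data shapes and the function name `rank_feed` are hypothetical, assumed for this example:

```python
from datetime import datetime

def rank_feed(items: list[dict], interest_scores: dict[str, float] | None = None) -> list[dict]:
    """Rank feed items. If interest_scores is None, the user has chosen the
    non-profiling alternative, so rank by recency instead of a user profile."""
    if interest_scores is None:
        return sorted(items, key=lambda i: i["published_at"], reverse=True)
    # Personalized path: rank by the user's per-topic interest scores.
    return sorted(items, key=lambda i: interest_scores.get(i["topic"], 0.0), reverse=True)

posts = [
    {"topic": "sports", "published_at": datetime(2024, 1, 2)},
    {"topic": "news", "published_at": datetime(2024, 1, 5)},
]
print([p["topic"] for p in rank_feed(posts)])                   # ['news', 'sports']
print([p["topic"] for p in rank_feed(posts, {"sports": 0.9})])  # ['sports', 'news']
```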
Similarly, the DMA is broadly aimed at increasing competition in digital marketplaces and considers some AI deployments in that scope. For example, large technology companies deemed to be “gatekeepers” under the law will be barred from self-preferencing their own products and services over third parties, a rule that is certain to affect AI ranking in search engines and the ordering of products on e-commerce platforms. The European Commission will also be able to conduct inspections of gatekeepers’ data and AI systems. While the DMA and DSA are not primarily about AI, these laws signal a clear willingness by the EU to govern AI built into highly complex systems.
Overall, we think it is fair to say that the European legislature’s approach to AI risk management is, in aggregate, more centrally coordinated and offers more comprehensive regulatory coverage than that of other governments, particularly the U.S. government. The EU’s legal framework covers more applications and includes more binding rules for each application. On the other hand, both the U.S. and EU regulatory frameworks favor largely risk-based approaches to AI regulation and have described similar principles for how dependably AI should function.