When will the Act come into force?
As of last week, both the European Parliament and the Council of the EU have adopted their negotiating positions on the draft. This leads to the next stage of the EU’s legislative procedure, the trilogues: informal tripartite meetings between representatives of the Parliament, the Council and the Commission, conducted with a view to finalising the law. While the trilogues can be a lengthy process, it is possible that the Act will be adopted by the end of 2023. It would then take effect following a two-year implementation period.
Who does the Act apply to?
The Act will broadly apply to providers and deployers of AI systems. However, the current draft from the Parliament envisages that importers and distributors of AI systems, as well as authorised representatives of providers of AI systems, that are established in the EU will also be covered. In the current draft:
- Providers are actors who develop AI systems with a view to placing them on the market or putting them into service in the EU (e.g., OpenAI), irrespective of whether they are established within the EU. In addition, the current draft envisages that providers placing AI systems on the market or putting them into service outside the EU may also be covered if the developer or distributor of the AI system is located in the EU.
- Deployers (the term the Parliament now prefers to ‘users’) of AI systems, on the other hand, are natural or legal persons using AI in the course of a professional activity. Deployers may use APIs to embed AI products within their own products or may simply use AI systems as internal tools. Providers and deployers of AI systems located outside the EU may also be covered where the output produced by the system is to be used in the EU.
- Individuals using AI systems in the course of personal, non-professional activities have no obligations under the Act.
The Act will not, however, apply to certain categories of AI systems, such as research, testing and development activities carried out before an AI system is placed on the market, or AI systems developed or used exclusively for military purposes.
What does the Act apply to?
The Parliament currently defines an AI system as a “machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments”. If adopted, this broad definition could also capture some more traditional machine learning models.
The proposed approach to regulation is a sliding scale of rules based largely on risk: the higher the perceived risk, the stricter the rules. The Act will classify AI systems into three risk categories: unacceptable risk, high risk, and low or minimal risk.
- AI systems with an unacceptable risk will be prohibited. This will include the most intrusive uses of AI systems, for instance, emotion recognition systems in law enforcement or real-time remote biometric identification systems in publicly accessible places.
- AI systems with a high risk will be subject to specific requirements. The key factor in establishing whether a system is high risk is the purpose for which it is deployed. Originally, the Commission envisaged a prescribed list of high-risk AI systems, with the obligations applying to any system falling into one of the listed categories; examples include biometrics, law enforcement and the management of critical infrastructure. Crucially, AI intended to be used in the recommender systems of social media platforms designated as very large online platforms under the Digital Services Act is also categorised as high risk under the current proposal. The Parliament’s proposal has, however, narrowed the high-risk categorisation by requiring an additional objective test: whether the system poses a significant risk of harm to the health, safety or fundamental rights of a natural person.
- AI systems with a low risk will be largely unregulated.
What are the obligations for AI systems covered by the Act?
The key obligation with which all AI systems, even those that do not fall under the high-risk category, will have to comply is transparency. In practice, this means that AI systems intended to interact with natural persons must be designed and developed in such a way that the natural person is informed that they are interacting with an AI system, although the precise obligations can differ based on the type of AI system. Under the current proposal, where appropriate, the transparency obligation may extend to additional information, such as which functions are AI-enabled, whether there is human oversight and who is responsible for the decision-making process.
The specific requirements for high-risk AI systems differ depending on the circumstances but may include, for example, registering the AI system in a designated EU database before it is placed on the market, keeping logs automatically generated by the AI system, implementing human oversight aimed at mitigating risks, implementing quality management measures, and post-market monitoring of the AI system’s performance. Another significant change now approved by the Parliament is a requirement for deployers to carry out a fundamental rights impact assessment when using a high-risk AI system, assessing the system’s impact in the specific context of use.
Organisations caught by the Act can expect strict financial penalties for non-compliance. The Parliament has proposed increasing the highest fine to €40 million or 7% of worldwide annual turnover.
Are there special rules for generative AI?
Originally, the Act did not cover AI systems without a specific purpose. This has changed recently with the Parliament’s addition of specific provisions covering “providers of foundation models used in generative AI”. According to the Parliament, providers (but not deployers) of generative AI systems should be required to meet obligations around transparency, data governance, risk mitigation and registration. Importantly, the models would have to be designed with safeguards preventing the generation of content that breaches the law (including copyright law), and providers would need to publish summaries of the copyrighted data used to train their models.
What does the Act mean for businesses?
Anyone developing, deploying or professionally using AI in the EU, or for an EU audience, will be affected by the Act to some degree. However, given how the Act defines high-risk AI systems, implementing AI in many commercial activities is unlikely to meet the threshold for the strictest obligations. For instance, AI systems developed for use in music editing or video games will need to be distinguished from systems developed with a view to influencing what we see on social media, or influencing voters in political campaigns.
The rise in popularity of generative AI systems nonetheless means that more developers may come within the scope of the Act. If the current amendments are passed, those developers (but not deployers) will likely need to adhere to specific requirements despite not falling under the high-risk category.
Download the flowchart below.
Client Alert 2023-137