A proposed European regulation on artificial intelligence (AI) (the Regulation) was released in draft on 21 April 2021, following the European Commission’s white paper “On Artificial Intelligence – A European approach to excellence and trust”, published in February 2020. The Regulation shows that the European Union is seeking to establish a legal framework for AI by laying down harmonised rules on AI and a coordinated plan with EU member states to strengthen AI uptake and innovation across the EU, whilst guaranteeing EU citizens’ rights and safety.
Introduction
The Regulation applies to AI systems placed on the market or put into service in the EU (irrespective of the location of the provider), to users of AI systems located in the EU, and to providers and users of AI systems located outside the EU where the output produced by the system is used in the EU.
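By way of illustration only, this threefold territorial scope can be read as a simple disjunctive test. The sketch below is ours, written in Python; the class, field and function names are invented for this example and do not appear in the Regulation.

```python
from dataclasses import dataclass

@dataclass
class Deployment:
    """Illustrative fields only – not terminology from the Regulation."""
    placed_or_put_into_service_in_eu: bool  # system placed on the market or put into service in the EU
    user_located_in_eu: bool                # a user of the system is located in the EU
    output_used_in_eu: bool                 # output produced by the system is used in the EU

def regulation_applies(d: Deployment) -> bool:
    # Any one of the three triggers brings the deployment within scope,
    # irrespective of where the provider itself is established.
    return (d.placed_or_put_into_service_in_eu
            or d.user_located_in_eu
            or d.output_used_in_eu)
```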
‘Artificial intelligence systems’ (AI systems) are defined in the Regulation as “software that is developed with one or more of the [specified] techniques and approaches and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with”. Unlike the General Data Protection Regulation (GDPR), which is the piece of EU legislation that comes closest to regulating AI, the Regulation is broader and covers more than just the processing of personal data.
The rules in the Regulation, which will invariably affect EU citizens, take a ‘human-centric’, risk-based approach. They aim to ensure that EU citizens can trust AI offerings, as the European Commission’s press release accompanying the Regulation makes plain: “On artificial intelligence, trust is a must, not a nice-to-have.”
Prohibited AI practices
AI systems will be prohibited if they are considered a clear threat to the safety, livelihoods and rights of individuals and violate the EU’s values and fundamental rights. This includes AI systems that:
(i) deploy subliminal techniques in order to materially distort a person’s behaviour;
(ii) exploit any of the vulnerabilities of a specific group of individuals due to their age, or physical or mental disability, in order to materially distort the behaviour of a person pertaining to that group; and
(iii) are used by public authorities for ‘social scoring’ – specifically, using AI systems for the evaluation or classification of the trustworthiness of individuals based on their social behaviour or known or predicted personal or personality characteristics, if the social score leads to detrimental or unfavourable treatment.
Moreover, the use of real-time remote biometric identification systems (such as facial recognition) in public spaces for law enforcement purposes is prohibited. However, there are some narrow exceptions to this prohibition, clearly set out in the Regulation, such as where use is strictly necessary to search for a missing child or to prevent a specific and imminent terrorist threat. Any reliance on these exceptions must be authorised by a judicial or other independent body to ensure appropriate safeguards are in place.
A breach of the prohibited AI practices is subject to an administrative fine of up to €30 million or 6 per cent of the infringer’s global annual turnover in the previous financial year, whichever is higher – significantly higher than the maximum penalty currently available under the GDPR (€20 million or 4 per cent of global annual turnover).
Obligations in relation to high-risk AI systems
These AI systems are not prohibited, but their classification as ‘high-risk’ subjects AI providers and users to additional, specific obligations (as detailed below). The risk posed by an AI system is assessed not only on the basis of the function performed, but also on its intended purpose and the manner in which it is used. A system is considered high risk where it poses a high risk of harm to the health and safety, or the fundamental rights, of individuals, taking into account both the severity of the possible harm and the likelihood of its occurrence. High-risk AI systems give rise to a number of obligations, including implementing a risk management system, eliminating or reducing risks through design and development, adopting mitigation measures, and training and testing.
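The Regulation designates high-risk systems by reference to listed areas and use cases rather than by a numeric formula, but the severity-and-likelihood reasoning described above resembles a conventional risk matrix. The following Python sketch is a generic illustration of that reasoning only; the levels and threshold are our assumptions, not the Regulation’s legal test.

```python
from enum import IntEnum

class Level(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

def is_high_risk(severity: Level, likelihood: Level, threshold: int = 6) -> bool:
    """Generic risk-matrix illustration – NOT the Regulation's test, which
    designates high-risk systems by listed areas and intended purposes.
    Risk is scored as severity x likelihood against an assumed threshold
    (6 here, e.g., HIGH severity with MEDIUM likelihood)."""
    return severity * likelihood >= threshold
```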
Examples of high-risk AI systems identified in the Regulation include the use of AI in:
- Management and operation of critical infrastructure (e.g., transport)
- Product safety components
- Educational and vocational training
- Employment, staff management and access to self-employment
- Essential private and public services (e.g., credit scoring to obtain loans)
- Law enforcement
- Migration, asylum and border control management
- Administration of justice and democratic processes
The Regulation requires providers to draw up detailed technical documentation for AI systems, to provide users with concise, complete and clear instructions on the use of such systems, to put in place measures ensuring appropriate human oversight, and to design systems to achieve appropriate levels of accuracy, robustness and cybersecurity.
Further, all high-risk AI systems will need to be registered in a publicly accessible EU database (to be established by the European Commission in accordance with the Regulation), which will include information inputted by AI system providers.
Limited risk and minimal risk
Where AI systems are not classed as high-risk, specific transparency obligations still apply. For example, providers will need to ensure users are aware that they are interacting with machines (e.g., chatbots for customer service) so that users can make informed decisions on whether they want to proceed. This is similar to the transparency obligation under the GDPR, which requires providers to inform data subjects of the existence of any automated decision-making by providing meaningful information about the logic involved in the decision-making.
Non-high-risk AI systems (e.g., the free use of applications such as AI-enabled video games or email spam filters) are not covered by the Regulation as they represent minimal risk or no risk to citizens’ rights and safety, and so companies and users will be free to use them. The European Commission believes most AI applications will fall into the non-high-risk category.
Supervision and sanctions
Enforcement of the Regulation will be left in the hands of individual member states, which will be responsible for appointing market surveillance authorities to supervise how the rules are applied nationally. To support the member states, the European Commission intends to establish a European Artificial Intelligence Board to ensure the Regulation is consistently applied across the EU and to facilitate its implementation. Additionally, voluntary codes of conduct are proposed for non-high-risk AI systems, and regulatory sandboxes are envisaged to facilitate responsible innovation.
Member states will be responsible for laying down the rules on penalties, including administrative fines, for infringements of the Regulation. They will be able to issue fines for breaches of up to €30 million or 6 per cent of the infringer’s global annual turnover, whichever is higher – for example, where a company develops a prohibited AI system and places it on the market, or fails to put in place a compliant data governance programme for high-risk AI systems.
A failure to comply with the Regulation, other than as stated above, will be subject to an administrative fine of up to €20 million or 4 per cent of global annual turnover, whichever is higher. Supplying incorrect, incomplete or misleading information to notified bodies and national competent authorities may result in fines of up to €10 million or 2 per cent of global annual turnover, again whichever is higher.
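To make the ‘whichever is higher’ mechanic concrete across the three tiers, the short Python calculation below computes each cap for a hypothetical turnover. The tier labels are our shorthand; the fixed amounts and percentages are those stated above.

```python
def fine_cap(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """Maximum administrative fine: the greater of a fixed amount and a
    percentage of global annual turnover in the previous financial year."""
    return max(fixed_cap_eur, pct * turnover_eur)

# The three tiers described above (fixed cap in EUR, percentage of turnover):
TIERS = {
    "prohibited practices / data governance": (30_000_000, 0.06),
    "other non-compliance": (20_000_000, 0.04),
    "incorrect or misleading information": (10_000_000, 0.02),
}

turnover = 2_000_000_000  # hypothetical global annual turnover of EUR 2 billion
for label, (fixed_cap, pct) in TIERS.items():
    print(f"{label}: cap = EUR {fine_cap(turnover, fixed_cap, pct):,.0f}")

# For a EUR 2bn turnover, the percentage exceeds the fixed cap in every tier:
# EUR 120m, EUR 80m and EUR 40m respectively.
```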
Next steps
The Regulation will inevitably affect all sectors of the European Union and its economy. It is still very much in its infancy – member states and the European Parliament will first need to adopt the Commission’s proposal on a European approach to artificial intelligence. Once adopted, it will apply directly across the EU, but this could take several years. The Regulation, once adopted, will hopefully create greater clarity in the area of AI and streamline the approaches taken by member states to regulate AI.
Our U.S. team have recently written about efforts in the United States to introduce AI regulation, with various government organisations issuing a request for information to help shape AI regulation in future. For more information, read our client alert at reedsmith.com.
Client Alert 2021-137