Reed Smith Client Alerts

The European Commission has released a draft regulation on the use of AI, which it has proposed as a risk-based framework. The regulation prohibits certain practices when using AI, provides safeguards for high-risk applications and imposes heavy fines for failing to comply. We take a closer look at the proposed draft and discuss how it goes beyond the requirements of the GDPR.

A proposed European regulation on artificial intelligence (AI) (Regulation) was released in draft form on 21 April 2021, following the European Commission’s white paper “On Artificial Intelligence – A European approach to excellence and trust”, published in February 2020. The Regulation shows that the European Union is seeking to establish a legal framework for AI by laying down harmonised rules on AI and a coordinated plan with EU member states to strengthen AI uptake and innovation across the EU, whilst guaranteeing EU citizens’ rights and safety.


The Regulation applies to providers placing AI systems on the market or putting them into service in the EU (irrespective of where the provider is located), to users of AI systems located in the EU, and to providers and users of AI systems located outside the EU where the output produced by the system is used in the EU.

‘Artificial intelligence systems’ (AI systems) are defined in the Regulation as “software that is developed with one or more of the [specified] techniques and approaches and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with”. Unlike the General Data Protection Regulation (GDPR), which is the piece of EU legislation that comes closest to regulating AI, the Regulation is broader and covers more than just the processing of personal data.

The rules in the Regulation, which will invariably affect EU citizens, take a ‘human-centric’, risk-based approach. They aim to ensure that EU citizens can trust AI offerings, as demonstrated by the comment in the European Commission’s press release accompanying the Regulation: “On artificial intelligence, trust is a must, not a nice-to-have.”

Prohibited AI practices

AI systems will be prohibited if they are considered a clear threat to the safety, livelihoods and rights of individuals and violate the EU’s values and fundamental rights. This includes AI systems that:

(i) deploy subliminal techniques in order to materially distort a person’s behaviour;

(ii) exploit any of the vulnerabilities of a specific group of individuals due to their age, or physical or mental disability, in order to materially distort the behaviour of a person pertaining to that group; and

(iii) are used by public authorities for ‘social scoring’ – specifically, using AI systems for the evaluation or classification of the trustworthiness of individuals based on their social behaviour or known or predicted personal or personality characteristics, if the social score leads to detrimental or unfavourable treatment.

Moreover, the use of real-time remote biometric identification systems (such as facial recognition) in public spaces for law enforcement purposes is prohibited. However, the Regulation sets out some narrow exceptions to this prohibition, such as where use is strictly necessary to search for a missing child or to prevent a specific and imminent terrorist threat. Reliance on these exceptions must be authorised by a judicial or other independent body to ensure that appropriate safeguards are in place.

A breach of the prohibited AI practices is subject to an administrative fine of up to €30 million or 6 per cent of the infringer’s global annual turnover in the previous financial year, whichever is higher. This significantly exceeds the maximum penalty under the GDPR (€20 million or 4 per cent of global annual turnover).