Twenty days after the publication of the Artificial Intelligence (AI) Act in the Official Journal of the EU: Entry into force date.
Action:
The AI Act enters into force.
Comment:
Publication is expected in April/May 2024.
Entry into force date plus six months.
Action:
Title I (General provisions) and Title II (Prohibited AI practices) become applicable.
Comment:
- Title I comprises the Act’s subject matter, scope, definitions and AI literacy provisions.
- Title II defines prohibited AI practices and includes, in particular:
  - AI deploying subliminal, purposefully manipulative or deceptive techniques;
  - AI systems that exploit vulnerabilities of a person or a specific group of persons;
  - biometric categorisation systems that infer sensitive attributes;
  - social scoring systems;
  - AI systems assessing the risk of a natural person committing a criminal offence based solely on profiling or personality traits; and
  - AI systems that infer the emotions of a natural person in the workplace or in education institutions.
Entry into force date plus nine months.
Action:
Codes of practice need to be ready.
Comment:
Codes of practice will be developed by industry, with the participation of the member states (through the AI Board) and facilitated by the AI Office. The drawing up of the codes of practice should be an open process to which all interested stakeholders will be invited, including companies as well as civil society and academia. The AI Office will also evaluate these codes of practice and can formally approve them or, if they are inadequate to cover the obligations, provide common rules for implementing those obligations through an implementing act.
Entry into force date plus 12 months: Delayed entry into force.
Action:
The following become applicable:
- The chapter on notifying authorities and notified bodies (and member states must have appointed their notifying authorities);
- The title on governance;
- The title on general purpose AI (for AI systems placed on the market after that date); and
- The title on penalties (except for the fines for providers of General Purpose AI).
Comment:
- ‘Notifying authorities’ are the national authorities responsible for setting up and carrying out the necessary procedures for the assessment, designation and notification of conformity assessment bodies and for their monitoring.
- ‘Notified bodies’ are conformity assessment bodies that have been notified in accordance with the Act by their notifying authority.
- The title on governance covers the EU AI Board, a scientific panel of independent experts and national competent authorities (defined as the notifying authority and the market surveillance authority in each member state).
- A ‘general purpose AI model’ is defined as an AI model that displays significant generality and is capable of performing a wide range of distinct tasks. Providers of such models will be obliged to supply downstream providers with the information and elements they need to comply with the requirements for high-risk AI systems, including for the purpose of conformity assessment.
- Highest penalty (for prohibited AI) is an administrative fine of up to €35 million or, if the offender is a company, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
Entry into force date plus 24 months: Applicable date.
Action:
Act becomes applicable across the EU.
Comment:
- The Act will apply to high-risk AI systems (other than for safety) placed on the market or put into service before the applicable date only if, from that date, those systems are subject to significant changes in their design or intended purpose.
- At least one regulatory sandbox (i.e., regulatory tools allowing the testing of AI systems under the supervision of competent authorities) per member state should be operational.
- AI regulatory sandboxes are controlled frameworks set up by a competent authority that offer providers of AI systems the possibility to develop, train, validate and test, where appropriate, an innovative AI system in real-world conditions, pursuant to a sandbox plan for a limited time under regulatory supervision.
Entry into force date plus 36 months: Applicable date plus 12 months.
Action:
GPAI models placed on the market before the delayed entry into force date need to comply with the Act.
AI systems used as safety components, or as standalone safety products or services, become ‘high risk’.
Comment:
- Every GPAI model placed on the market before the GPAI provisions enter into application will have a further two years from that date (three years from entry into force in total) to be brought into compliance, irrespective of whether there is a substantial modification.
- High risk AI is broadly defined as AI posing a significant risk of harm to the health, safety or fundamental rights of natural persons, including by materially influencing the outcome of decision-making.
- Systems deemed high risk, beyond safety components, products or services, are listed in Annex III.
Entry into force date plus six years: Applicable date plus four years.
Action:
High risk AI systems already in use by public authorities that were placed on the market or put into service before the applicable date must comply with the requirements of the Act.
End of 2030.
Action:
AI systems that are components of large-scale IT systems in the area of freedom, security and justice, placed on the market or put into service before entry into force date plus 36 months must comply with the Act.
Comment:
Examples of such large-scale IT systems include the Schengen Information System, the Visa Information System and Eurodac.