July 2024 Blog

Big news: The EU Artificial Intelligence Act (AI Act) has been published today

The EU AI Act (hereinafter the "AI Act"), long in the making and the subject of much debate, was published in the Official Journal of the EU today.

Even though the AI Act does not enter into force until August 1, 2024 and its chapters (and therefore the resulting obligations) will only become applicable in stages (see below), companies should start familiarising themselves with the content of this extensive and complex set of rules now at the latest. The following short overview is primarily intended to show why the AI Act is so highly relevant and why every company that has potential dealings with AI systems should be familiar with it (at least in broad outline).

The AI Act - what is it?

  • The AI Act is the world's first comprehensive AI law. It combines the areas of copyright, privacy and data protection, as well as IT security and product safety. It does not replace existing laws such as the GDPR, but rather supplements them with regard to AI systems, thus closing existing regulatory gaps.
  • As an EU regulation, the AI Act is directly applicable in Germany; it therefore does not need to be transposed into national law.
  • The purpose of the law is to reduce the risks associated with the use and development of AI through regulation: the use of AI systems is prohibited in certain application scenarios or made subject to technical and organisational requirements.

What are the objectives of the AI Act?

The AI Act pursues several objectives in the use and development of AI:

  • Protection of fundamental rights and freedoms: AI systems must not be discriminatory, manipulative or otherwise harmful.
  • Strengthening trust in AI systems: Transparency and accountability obligations, together with requirements on explainability, are intended to make decisions taken by AI systems understandable for users.
  • Promoting innovation and competitiveness: A harmonised legal framework will be created to promote investment in AI and its use by the economy.
  • Reducing the risks of AI use (in high-risk areas): Existing technical "best practices" will now be set out in regulatory terms (in the EU, but de facto worldwide) and made enforceable by law.

How does the AI Act pursue these goals?

The AI Act takes a risk-based approach and categorises AI systems into specific risk groups: the higher the risk of an AI system, the stricter the requirements of the AI Act. A distinction is made between prohibited AI practices, high-risk AI systems, AI systems with limited risk and general-purpose AI models (such as those underlying ChatGPT); a simplified illustration follows the list below.

  • Unacceptable risks are prohibited (e.g. social scoring systems and manipulative AI).
  • The largest part (Art. 6 to 49 AI Act) deals with high-risk AI systems (see Annexes I and III), which are regulated over their entire life cycle.
  • A smaller part of the AI Act deals with AI systems with limited risk, to which (lighter) transparency obligations apply. Companies that develop, use or deploy these must above all ensure that end users know that they are interacting with AI (e.g. chatbots and deepfakes).
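For companies building an internal AI inventory, this tiering can be mirrored in a simple data structure. The following is a minimal, purely illustrative Python sketch; the tier names and the example use-case mapping are our own simplification, not an official classification under the AI Act.

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified risk tiers loosely following the AI Act's risk-based approach."""
    PROHIBITED = "unacceptable risk - banned practice"
    HIGH_RISK = "high risk - full life-cycle obligations"
    LIMITED_RISK = "limited risk - transparency obligations"
    MINIMAL_RISK = "minimal risk - no specific AI Act obligations"

# Hypothetical mapping of internal use cases to tiers. The actual classification
# always requires a legal assessment of the concrete system against the AI Act.
USE_CASE_TIERS = {
    "social scoring of citizens": RiskTier.PROHIBITED,
    "CV screening for hiring": RiskTier.HIGH_RISK,
    "customer service chatbot": RiskTier.LIMITED_RISK,
    "spam filter": RiskTier.MINIMAL_RISK,
}

def tier_for(use_case: str) -> RiskTier:
    """Look up a use case; default to HIGH_RISK to force a manual review."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH_RISK)

if __name__ == "__main__":
    for case in USE_CASE_TIERS:
        print(f"{case}: {tier_for(case).value}")
```

Defaulting unknown use cases to the high-risk tier is a deliberately conservative choice: it forces every unclassified system into a manual legal review rather than silently treating it as unproblematic.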

Relevance: Wide scope of application and high fines

The AI Act is highly relevant as it has a broad geographical and material scope of application and also contains high fines for non-compliance with its provisions.

(1) Geographical scope of application

  • Like the GDPR (which remains applicable alongside it), the AI Act applies not only to European organisations but, in certain circumstances, also to organisations outside the EU (the so-called "market location principle").
  • In short, the AI Act applies to all AI systems that have an impact on the EU internal market:
    • Organisations based in the EU that use AI systems;
    • Importers, distributors or manufacturers of AI systems in the EU;
    • Suppliers (inside or outside the EU) that place AI systems on the EU market; and
    • Providers and users of AI systems (inside or outside the EU) whose output is used in the EU.
  • Most important exceptions to the scope of application: exceptions apply to scientific research with AI, to the entire development process and to the open-source sector (Art. 2 AI Act).

(2) Material scope of application

Both the addressees of the AI Act and the definition of "AI systems" are broad:

  • Target group: The circle of addressees is very broad and includes actors in the following roles: provider, manufacturer, deployer (operator), importer, distributor, authorised representative or affected person in the EU.
  • Definition of AI system: According to the legal definition, systems are categorised as AI systems if they (a simplified screening sketch follows this list):
    • are machine-based systems,
    • are designed to operate with varying degrees of autonomy,
    • may exhibit adaptiveness after deployment,
    • and, for explicit or implicit objectives, infer from the inputs they receive how to produce outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments.
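As a purely illustrative aid (and not a legal test), these criteria can be turned into a rough screening checklist for an internal software inventory. The class and function names below are our own assumptions:

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    """Hypothetical screening record for an internal software inventory."""
    machine_based: bool               # runs as software/on machines, not a purely human process
    operates_with_autonomy: bool      # acts with some degree of autonomy
    may_adapt_after_deployment: bool  # adaptiveness is possible but not strictly required
    infers_outputs_from_inputs: bool  # produces predictions, content, recommendations or decisions

def likely_ai_system(p: SystemProfile) -> bool:
    """Rough first-pass indicator of whether the AI Act's definition could apply.
    A positive result only flags the system for proper legal review."""
    return (
        p.machine_based
        and p.operates_with_autonomy
        and p.infers_outputs_from_inputs
    )

# Example: a recommendation engine would typically be flagged for review.
engine = SystemProfile(True, True, True, True)
print(likely_ai_system(engine))  # True -> hand over to legal/compliance review
```

Note that adaptiveness after deployment is recorded but not required for a positive flag, mirroring the "can" wording of the definition; any system flagged here still needs a proper legal assessment.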

(3) Self-regulation mechanism and very high fines

The AI Act leaves the implementation of the required measures to companies, but at the same time provides for high fines if they fail to meet the requirements.

  • The AI Act does not regulate how the individual addressees should specifically implement the required measures, nor does it provide for official support. Just as under the GDPR, a self-regulation mechanism (with the corresponding risks and uncertainties for companies) is being established here.
  • The possible fines even exceed those of the GDPR (see the illustrative calculation after this list):
    • The maximum possible fine for the use of prohibited AI is €35 million or up to 7% of total annual global turnover in the previous financial year - whichever is higher.
    • Fines for companies that fail to fulfil the compliance requirements for high-risk AI systems can amount to up to 3% of global turnover in the previous financial year or €15 million.
    • The provision of false, incomplete or misleading information can result in fines of up to 1% of the previous year's global turnover or €7.5 million.
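The "whichever is higher" mechanism can be illustrated with a short calculation. The turnover figure below is a made-up example; within the statutory maximum, the actual fine is set by the competent authorities on a case-by-case basis:

```python
def max_fine(prev_year_turnover_eur: float, fixed_cap_eur: float, turnover_pct: float) -> float:
    """Statutory maximum under the AI Act: the higher of the fixed cap
    and the percentage of the previous year's total worldwide turnover."""
    return max(fixed_cap_eur, prev_year_turnover_eur * turnover_pct)

# Hypothetical company with EUR 2 billion turnover in the previous financial year.
turnover = 2_000_000_000

print(max_fine(turnover, 35_000_000, 0.07))  # prohibited AI practices       -> 140,000,000.0
print(max_fine(turnover, 15_000_000, 0.03))  # high-risk compliance breaches ->  60,000,000.0
print(max_fine(turnover,  7_500_000, 0.01))  # false/misleading information  ->  20,000,000.0
```

For a company of this size, the turnover-based cap is the relevant maximum in all three scenarios; for smaller companies, the fixed amounts may be decisive.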

Timing: When do which provisions of the AI Act apply?

Following the AI Act's entry into force on August 1, 2024, its provisions will become applicable in stages:

February 2, 2025: Applicability of Chapter I (General Provisions) and Chapter II (Prohibited AI Practices).

  • Chapter I covers the subject matter of the law, the scope of application, the definitions and the provisions on AI literacy.
  • Chapter II defines prohibited AI practices and covers in particular AI systems that use subliminal, intentionally manipulative or deceptive techniques; AI systems that exploit vulnerabilities of an individual or a particular group of individuals; certain biometric categorisation systems; social scoring systems; AI systems for assessing the risk that an individual will commit a criminal offence; and AI systems for inferring the emotions of individuals in the workplace and in educational settings.

August 2, 2025: In addition to the provisions on governance, the chapter on general-purpose AI (for models placed on the market after this date) and the chapters on confidentiality and penalties become applicable:

  • For companies developing or using "general-purpose AI models" (i.e. multi-purpose AI, also called "GPAI models"), this means that governance measures (in particular technical documentation, information/documents for downstream providers, a policy to comply with copyright law, and publication of a summary of the content used for training) will now be required.
  • Obligations along the production or supply chain: Providers of such models are obliged to pass on all relevant information and elements to downstream providers of (high-risk) AI systems so that these can fulfil the applicable requirements, including for the purpose of conformity assessment.

August 2, 2026: Full applicability and enforcement across the EU, with a focus on high-risk systems.

  • However, it should be noted that the AI Act applies to high-risk AI systems (with the exception of safety systems) that were placed on the market or put into service before this date only if those systems subsequently undergo significant changes to their design or intended purpose.
  • Other AI systems placed on the market before the end of this 24-month transition period must comply with the law from this date.

August 2, 2027: Extension to further high-risk AI systems regulated by other EU legislation; obligations for high-risk AI systems under Annex I (AI in medical devices, machinery, etc.).

Need for action

Companies that work with AI systems in any way should familiarise themselves with the rules and requirements of the AI Act and take the necessary (internal) precautions. Guidance on the topic of AI and data protection is currently provided by the data protection authorities in the form of the DSK's orientation guide, which can be found on the DSK website. However, concrete implementation requires a more detailed legal examination of the circumstances of each individual case. Please feel free to contact us if you have any questions about the AI Act and/or would like our support in implementing the resulting obligations for your company.
