Daniel

The EU AI Act in a Nutshell

Updated: Jun 21



TL;DR AI Act
The EU AI Act aims to regulate artificial intelligence (AI) on a product level. It takes a risk-driven approach and categorizes AI systems into four risk levels:
unacceptable, high, limited, minimal
Unacceptable-risk AI, e.g. social scoring, will be banned. The main focus of the AI Act is on high-risk AI systems, used in critical areas like healthcare and transportation. Those will face stringent requirements for risk and quality management as well as documentation across the lifecycle. Limited-risk AI will require transparency measures, while minimal-risk AI will have no additional obligations. The Act also addresses General Purpose AI (GPAI), e.g. LLMs, which comes with its own regulatory requirements.

Artificial Intelligence (AI) holds the potential to revolutionize various sectors, from healthcare to finance, promising unprecedented efficiencies and innovations. However, with great power comes significant responsibility. As AI technologies advance, they bring along risks such as privacy violations, bias, and unintended consequences that could harm individuals and society. Recognizing these challenges, the European Union has introduced the EU AI Act, the world's first comprehensive regulatory framework for AI. This legislation aims to ensure that AI systems are safe, transparent, and aligned with EU values, protecting consumers from potential harm. By striking a balance between fostering innovation and safeguarding fundamental rights, the EU AI Act seeks to harness the benefits of AI while mitigating its risks. Regulation is crucial to ensure that AI developments contribute positively to society and do not compromise safety, privacy, or fairness.


What is the AI Act?

The European Union (EU) AI Act represents the most comprehensive legal framework on artificial intelligence (AI) globally. Notably, it applies uniformly across all member states and is not confined to EU-based organizations: the Act extends to any entity that places AI systems on the EU market. The goal of the AI Act is to protect the health, safety and fundamental rights of EU citizens, while making sure that the EU remains a driving force in the global AI race.

The backbone of the AI Act can be described by two fundamental principles:

  • Regulating AI on a system (application) level

  • Classifying AI systems in risk categories


Whether you are a provider, supplier, distributor or user of AI systems, it is therefore essential to understand whether your systems fall within the AI Act's definition of an AI system. Based on Article 3 of the AI Act, an AI system can be described as


“… a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments …”

If your AI system falls under this definition, the AI Act assigns it, based on its associated risk, to one of four risk categories:

  • Unacceptable risk

  • High risk

  • Limited risk

  • Minimal risk


All four risk categories come with different requirements, which will be the subject of the next section.
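To make the tiering concrete, here is a minimal illustrative sketch in Python of the four risk tiers as a data structure, with a few example use cases from this article mapped onto them. The mapping is purely illustrative; a real assessment must follow the Act's definitions and annexes, not a lookup table.

    from enum import Enum

    class RiskCategory(Enum):
        """The four risk tiers defined by the EU AI Act."""
        UNACCEPTABLE = "unacceptable"  # prohibited in the EU
        HIGH = "high"                  # stringent compliance obligations
        LIMITED = "limited"            # transparency obligations
        MINIMAL = "minimal"            # no additional obligations

    # Illustrative examples only (taken from this article, plus a commonly cited
    # minimal-risk example); not a legally valid classification.
    EXAMPLE_CLASSIFICATION = {
        "real-time biometric tracking in public places": RiskCategory.UNACCEPTABLE,
        "AI used in recruitment and performance evaluation": RiskCategory.HIGH,
        "customer-facing chatbot": RiskCategory.LIMITED,
        "spam filter": RiskCategory.MINIMAL,
    }

    for use_case, tier in EXAMPLE_CLASSIFICATION.items():
        print(f"{use_case}: {tier.value} risk")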



Risk Categories, Requirements and Examples

The EU AI Act classifies AI systems into four distinct risk categories, each imposing specific regulatory obligations based on the associated level of risk:




AI Systems with unacceptable risk

AI systems with an unacceptable risk level, i.e. systems that pose harm to the core values and fundamental rights of EU citizens, are prohibited in the EU. The following examples show which AI systems fall under this category:



Systems that are prohibited in the EU include, for instance, real-time biometric tracking systems in public places, various AI applications in the legal and law enforcement domain, and facial recognition databases built from untargeted scraping. Strictly military use of such systems does not fall under the EU AI Act.



AI Systems with high risk

The EU AI Act designates specific AI systems as high-risk because of their potential to significantly affect individuals' lives, rights, and safety. These high-risk systems must adhere to more stringent compliance requirements to guarantee their responsible development and deployment.


While some categories are relatively clear (e.g., migration and asylum), others are formulated in a very general way. We therefore provide a more detailed view to help you judge whether an AI system falls under the high-risk category. Examples of high-risk AI systems are:

  • Critical Infrastructure: AI systems to manage energy grids or transportation systems, as failure could have significant consequences

  • Education and Training: systems qualify as high risk if they have a significant influence on a person's educational or career path

  • Employment: AI used in recruitment, performance evaluation, etc. if it can disadvantage individuals

  • Access to Essential Services: AI used for credit scoring, insurance pricing, etc.

  • AI in Public Governance: AI used in influencing elections and voter behavior, law enforcement, etc.


If your AI system falls under this category, you will face specific obligations. Those obligations include, but are not limited to:

  • Risk assessment

  • Implementation of a risk management framework

  • Monitoring across the AI system lifecycle

  • Documentation of the development process

  • Human oversight


AI Systems with Limited (Transparency) and Minimal Risk

Systems that fall under the limited (transparency) risk category need to inform users that they are interacting with an AI system. A typical example is a customer-facing chatbot in a call center.

Other AI systems that pose minimal to no risk face no additional obligations under the AI Act, as they pose no harm to fundamental rights.


Timeline and Milestones of the AI Act

The effort to regulate AI in the EU began in April 2021, when the European Commission first published its proposal for what is now known as the EU AI Act. The European Parliament passed the AI Act on 13 March 2024 (523 votes in favor, 46 against and 49 abstentions). The Act officially entered into force 20 days after its publication in the Official Journal of the European Union. Key milestones and the timeline for the step-by-step application of the different AI Act elements can be found in the following figure:


Among the important dates, we want to highlight two milestones that are crucial for different risk levels of AI systems and may be critical for your business:

  • After 12 months: the rules for General Purpose AI become applicable. This may affect your GenAI applications, including LLMs and other models

  • After 36 months: the regulation for high-risk systems will come into force
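To illustrate how these relative milestones translate into calendar dates, here is a minimal Python sketch. It assumes entry into force on 1 August 2024 (20 days after publication in the Official Journal on 12 July 2024); treat the dates and the helper function as illustrative, not as legal guidance.

    from datetime import date

    # Assumed entry-into-force date (an assumption for illustration, not legal advice).
    ENTRY_INTO_FORCE = date(2024, 8, 1)

    def add_months(d: date, months: int) -> date:
        """Shift a date by a number of whole months, clamping the day if needed."""
        total = d.month - 1 + months
        year, month = d.year + total // 12, total % 12 + 1
        leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
        days_in_month = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
        return date(year, month, min(d.day, days_in_month[month - 1]))

    milestones = {
        "Prohibitions on unacceptable-risk AI apply": add_months(ENTRY_INTO_FORCE, 6),
        "General Purpose AI rules apply": add_months(ENTRY_INTO_FORCE, 12),
        "High-risk rules apply (per the 36-month milestone above)": add_months(ENTRY_INTO_FORCE, 36),
    }
    for label, when in milestones.items():
        print(f"{when:%d %b %Y}: {label}")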



Sanctions for non-Compliance

The EU AI Act is designed to promote the safe, ethical, and reliable development and application of AI technologies. As such, it includes stringent enforcement measures for companies and/or organizations that fail to adhere to its requirements.

At the same time, the fines become applicable following a specific timetable. Note that sanctions for SMEs differ: for an SME, the applicable fine is always the lower of the two amounts in the following fine structure:

  • Up to €35 million or 7% of global annual turnover, whichever is greater, for breaches of prohibitions

  • Up to €15 million or 3% of global annual turnover, whichever is greater, for breaches of the AI Act’s obligations

  • Up to €7.5 million or 1.5% of global annual turnover, whichever is greater, for providing incorrect information


As a general overview, most penalties will apply 24 months after the EU AI Act comes into force. Given the timetable above, most penalties will therefore apply from mid-2026 onward.

Exceptions are, of course, prohibited AI systems, for which the violation rules and fines become active after 6 months, and General Purpose AI models such as LLMs, for which the violation and fine structure applies after 12 months.
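To make the "whichever is greater" rule and the SME exception concrete, here is a minimal illustrative sketch of the fine arithmetic. The function name and the example turnover figure are hypothetical; the caps and percentages correspond to the tiers listed above.

    def max_fine(turnover_eur: float, fixed_cap_eur: float, turnover_pct: float, is_sme: bool) -> float:
        # Turnover-based component of the fine cap for the chosen tier,
        # e.g. fixed_cap_eur=35_000_000 and turnover_pct=0.07 for breaches of prohibitions.
        turnover_based = turnover_pct * turnover_eur
        # SMEs face the lower of the two amounts, all other organizations the higher.
        return min(fixed_cap_eur, turnover_based) if is_sme else max(fixed_cap_eur, turnover_based)

    # Hypothetical company with €2 billion global annual turnover breaching a prohibition:
    print(max_fine(2_000_000_000, 35_000_000, 0.07, is_sme=False))  # 140000000.0 -> 7% of turnover
    print(max_fine(2_000_000_000, 35_000_000, 0.07, is_sme=True))   # 35000000.0  -> fixed cap is lower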



What about GenAI & LLMs? – General Purpose

In the final version of the AI Act, the EU not only addresses AI systems but also places a specific focus on General Purpose AI (GPAI). Here, we must distinguish between GPAI models and GPAI systems. GPAI models are defined based on criteria such as:

  • The models are trained on large amounts of data using various methods, such as self-supervised, unsupervised or reinforcement learning

  • GPAI models may be placed on the market in various ways, including via libraries, APIs or as downloads

  • These models may be further fine-tuned for specific tasks


In general, there are two main categories of GPAI models:

  • Conventional GPAI models

    • Obligations: transparency, copyright, etc.

  • GPAI models with systemic risks:

    • Training compute > 10^25 FLOPs: any model trained with more than 10^25 floating-point operations is presumed to pose systemic risk

    • Obligations: risk management, red teaming, incident reporting, cybersecurity and others


This classification is currently under heavy discussion and has drawn a lot of criticism. So far, only a limited number of GPAI models fall under the systemic risk category, among them some LLMs such as GPT-4. Nevertheless, under the current legislation, GPAI models trained with more than 10^25 FLOPs will be treated as systemic risk models, and the additional governance obligations must be strictly adhered to.
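As a minimal sketch of how this threshold works in practice, the check below compares a model's cumulative training compute against the 10^25 FLOPs presumption. The constant name and the example compute figures are illustrative assumptions.

    # Threshold above which a GPAI model is presumed to pose systemic risk.
    SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

    def is_presumed_systemic_risk(training_compute_flops: float) -> bool:
        # True if cumulative training compute exceeds the 10^25 FLOPs threshold.
        return training_compute_flops > SYSTEMIC_RISK_THRESHOLD_FLOPS

    # Hypothetical models: one trained with ~5 * 10^25 FLOPs, one with ~3 * 10^24 FLOPs.
    print(is_presumed_systemic_risk(5e25))  # True  -> systemic-risk obligations apply
    print(is_presumed_systemic_risk(3e24))  # False -> conventional GPAI obligations apply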



Conclusion and Next Steps

The EU AI Act is a comprehensive piece of legislation that regulates the development and use of AI in the European Union. It was developed with international alignment in mind, aiming for common principles that countries beyond the EU can agree on.

The AI Act focuses primarily on AI systems that pose a certain level of risk (high risk and unacceptable risk). For some AI systems, such as chatbots in customer service, there is a transparency obligation.

Nevertheless, given the tight timetable until final approval, we believe there is still room for improvement and a great opportunity to contribute. Open points and potential next steps include:

  • The development of ISO/DIN/etc. standards and technical guidelines that help implement the AI Act correctly; this work is currently underway

  • A clear definition of what the specific obligations actually require and how to fulfil them: for example, risk management, quality management and documentation for high-risk systems

  • Integration of the “horizontal” AI Act into industry-specific (vertical) regulations and standards (finance, medical, etc.)

  • A clear definition of which AI systems fall under the “high risk” category
