Risk levels
The AI Act is underpinned by a risk-based regulatory approach: the higher the risk posed by an AI application, the stricter the regulation.
The level of risk is determined by two factors: the probability of harm occurring and the severity of that harm.
The AI Act contains various rules for classifying the level of risk of AI systems; the different levels of risk are described below. The requirements applicable to an AI system depend on the system’s level of risk.
The vast majority of AI applications do not present particular risks. These include email spam filters, personalised product suggestions, customer service chatbots and computer games that use AI. Such applications are subject to no rules at all or only to minimal ones, such as transparency obligations.
However, some AI applications pose potential risks to life and health or to European fundamental rights. These applications are subject to strict regulation. They include AI systems for the health and energy sectors, road traffic and air transport, and decision-making for welfare benefit assessments or credit scoring.
These levels of risk only apply to AI systems. Special requirements apply to general-purpose AI models. More information can be found here.
Unacceptable risk: prohibited
Some AI systems are classified as posing an unacceptable risk because they are incompatible with EU fundamental rights or pose a clear threat to people's safety and health. These AI systems have therefore been completely prohibited in the EU since 2 February 2025. They include, for example, social scoring systems, AI systems that manipulate people or exploit their vulnerabilities, and certain uses of real-time remote biometric identification in publicly accessible spaces. More information can be found here.
High risk: only allowed with a conformity assessment
High-risk AI systems can potentially cause serious harm if they malfunction or are misused. They are therefore subject to strict rules to ensure that their deployment does not adversely affect people's health, safety or fundamental rights, or the environment. A distinction is made between two types of high-risk AI systems: AI systems used as safety components of products covered by EU harmonisation legislation, and stand-alone AI systems used in specific areas listed in the AI Act, such as critical infrastructure, education or employment. More information is available here.
Limited risk: transparency obligations
Certain AI systems are subject to less stringent regulation. These include the majority of commercial AI applications, such as chatbots, recommendation algorithms and AI systems that generate text, image or audio content. Most of these systems pose only a low risk. Because of the potential for manipulation or deep fakes, companies must nevertheless make sure that these systems are transparent and user-friendly: users must, for example, be informed that they are interacting with an AI system and not with a human. If such a system is embedded in a high-risk AI system, the transparency obligations apply in addition to the requirements for high-risk AI systems. More information is available here.
Minimal or no risk: no particular obligations; voluntary codes of conduct possible
AI systems with minimal risk, such as spam filters, are largely unregulated because they do not pose any significant risks to society or individuals. Companies are nevertheless encouraged to voluntarily ensure that these systems comply with certain principles for trustworthy AI, such as fairness and transparency.
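To summarise the tiered logic in one place, the following minimal Python sketch maps each risk level to the regulatory consequence described above. It is an illustration under stated assumptions: the RiskLevel enum, the EXAMPLES mapping and the concrete classifications are hypothetical and do not constitute a legal assessment.

```python
# Illustrative sketch only: a simplified summary of the AI Act's four risk
# tiers and the regulatory consequence each one carries, as described above.
# The names (RiskLevel, EXAMPLES) and the example classifications are
# assumptions for illustration, not a legal classification tool.
from enum import Enum


class RiskLevel(Enum):
    UNACCEPTABLE = "prohibited in the EU since 2 February 2025"
    HIGH = "only allowed with a conformity assessment"
    LIMITED = "subject to transparency obligations"
    MINIMAL = "no particular obligations; voluntary codes of conduct possible"


# Example systems drawn from the text; real classification follows the
# detailed rules of the AI Act, not a simple lookup table like this one.
EXAMPLES = {
    "social scoring system": RiskLevel.UNACCEPTABLE,
    "credit scoring system": RiskLevel.HIGH,
    "customer service chatbot": RiskLevel.LIMITED,
    "email spam filter": RiskLevel.MINIMAL,
}

for system, level in EXAMPLES.items():
    print(f"{system}: {level.name.lower()} risk -> {level.value}")
```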
Service
FAQ
FAQs: all you need to know about AI
Artificial Intelligence: Questions and Answers (European Commission)
Events
AI-Café (in German)
Links and Downloads
Bundesnetzagentur's AI compliance compass (in German)
Hinweispapier: KI-Kompetenzen nach Artikel 4 KI-Verordnung (guidance paper on AI competences under Article 4 of the AI Regulation) (pdf / 357 KB) (in German)
EU guidelines on the definition of an artificial intelligence system
General-Purpose AI Code of Practice
Digital transformation among SMEs (in German)