Frequently Asked Questions

Who is affected by the AI Act?

The Act applies to all public and commercial actors, both inside and outside the EU, if the AI system is placed on the market in the EU or its use has an impact on people in the EU.

Important: The purely personal, non-professional use of AI systems is not affected by these regulations. (Article 2(10) AI Act)

The obligations apply to both providers and deployers of AI systems. However, there are exceptions: research and development work and testing of prototypes prior to market launch are not covered by these regulations. AI systems developed exclusively for military, defense, or national security purposes are also exempt, regardless of who uses them.

What is an AI system?

The AI Act deliberately defines the term ‘AI system’ broadly and in a technology-neutral manner to ensure that it remains applicable to future developments.

An AI system is therefore:

“a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments;” (Article 3(1) AI Act)

What are general-purpose AI models/GPAIs?

At present, the best-known general-purpose artificial intelligence (GPAI) models are large language models (LLMs), which typically consist of artificial neural networks; examples include GPT-4 from OpenAI, Gemini from Google and LLaMA 2 from Meta Platforms. These models, which also include large generative AI models, are often integrated into AI systems and serve as their basis because of their wide range of possible applications.

What risk levels are specified in the Act?

The AI Act categorises AI systems according to the risk they pose. This categorisation determines the requirements and conditions imposed on an AI system: the higher the risk, the stricter the rules.
There are four different risk levels (a short illustrative sketch follows the list):

  • Unacceptable risk = prohibited AI systems
  • High risk = special requirements must be met for safe use
  • Limited risk = compliance with transparency obligations necessary
  • Minimal or no risk = largely unregulated
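
As an illustration only, this tiered logic can be written down as a simple lookup. The enum below is a hypothetical sketch; its names and consequence strings paraphrase the list above and are not an official schema from the AI Act.

    from enum import Enum

    class RiskLevel(Enum):
        # Hypothetical encoding of the AI Act's four risk tiers; the
        # consequence strings paraphrase the FAQ list above, not the Act.
        UNACCEPTABLE = "prohibited AI practice"
        HIGH = "special requirements for safe use apply"
        LIMITED = "transparency obligations apply"
        MINIMAL = "largely unregulated"

    def regulatory_consequence(level: RiskLevel) -> str:
        # Look up the regulatory consequence attached to a risk tier.
        return level.value

    print(regulatory_consequence(RiskLevel.HIGH))
    # -> special requirements for safe use apply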

What are prohibited AI practices?

The AI Act defines eight AI practices that are prohibited as of 2 February 2025 due to their potential risks to European values and fundamental rights. Exceptions exist only in a narrow range of areas, in particular for law enforcement, where the use of real-time remote biometric identification is permitted under certain conditions.

What are high-risk AI systems?

A limited number of AI systems defined in the AI Act are classified as high-risk. Providers and deployers of these high-risk AI systems are subject to strict requirements intended to protect safety, human health and fundamental rights. Whether an AI system is classified as high-risk depends on its intended purpose.

What rules does the AI Act contain for labelling the results of generative AI?

Providers and deployers of certain AI systems must meet specific transparency requirements. Some of these requirements must already be taken into account during development. Most of these requirements aim to disclose the use of an AI system and to make its interaction with humans transparent. Because generative AI models can produce content such as text, images and audio, it is important to indicate that such content was generated by AI and not by a human being.
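
As a minimal sketch of what such a machine-readable disclosure could look like, a provider might attach provenance metadata to generated content. The JSON schema, field names and function below are illustrative assumptions; the AI Act requires machine-readable marking of AI-generated content but does not prescribe this particular format.

    import hashlib
    import json
    from datetime import datetime, timezone

    def label_ai_output(content: str, model_name: str) -> str:
        # Wrap generated content in a machine-readable AI-generation marker.
        # The schema is a hypothetical example, not a format from the AI Act.
        record = {
            "ai_generated": True,  # explicit disclosure flag
            "model": model_name,   # which model produced the content
            "generated_at": datetime.now(timezone.utc).isoformat(),
            # The hash lets downstream systems detect whether the content
            # was altered after it was labelled.
            "content_sha256": hashlib.sha256(content.encode("utf-8")).hexdigest(),
            "content": content,
        }
        return json.dumps(record, ensure_ascii=False)

    print(label_ai_output("An AI-written product description.", "example-llm-1"))

In practice, providers are more likely to rely on established provenance formats, such as metadata embedded directly in image or audio files, but the principle of a machine-readable marker is the same.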

Why are harmonised standards needed for high-risk AI systems?

Harmonised standards are essential to minimise the potential risks of high-risk AI systems, ensure their safety and reliability, and promote transparency and accountability, while at the same time enabling innovation and fair competition. Standards are considered ‘harmonised’ once their references have been published in the Official Journal of the European Union.

According to Article 40 of the AI Act, high-risk AI systems that comply with harmonised standards, or relevant parts of them, are presumed to comply with the corresponding requirements of the AI Act.

What transitional periods are in place?

Article 111 of the AI Act sets out transitional rules for AI systems that have already been placed on the market or put into service, and for general-purpose AI models that have already been placed on the market.

AI systems that are components of large-scale IT systems established by EU law in the area of freedom, security and justice – such as the Schengen Information System – and that were placed on the market or put into service before 2 August 2027 must be brought into compliance with the AI Act by 31 December 2030.

Operators of high-risk AI systems that were placed on the market or put into service before 2 August 2026 only have to comply with the requirements of the AI Act if these systems have since undergone significant changes in their design.

Providers of general-purpose AI (GPAI) models that were placed on the market before 2 August 2025 must comply with the requirements of the AI Act from 2 August 2027 onwards.

What is meant by AI literacy?

AI literacy essentially encompasses the skills, knowledge and understanding required

  • to use AI systems competently, responsibly and safely, and
  • to be aware of the opportunities and risks (e.g. ethical, legal, social) associated with AI.

What preparations is Bundesnetzagentur currently making?

Establishment of the single point of contact: The single point of contact pursuant to Article 70 of the AI Act serves as an interface between the European Commission's AI Office and the competent authorities in EU countries, in particular the notifying authorities and market surveillance authorities.

Preparation for notification tasks: Bundesnetzagentur is preparing itself in terms of content and organisation for the activities and tasks involved in the notification of high-risk AI systems in accordance with Annex I No. 6 and Annex III No. 1 of the AI Act.

Expansion of existing market surveillance activities: Bundesnetzagentur is the competent market surveillance authority within the meaning of Directive 2014/53/EU on the making available on the market of radio equipment. It will therefore also perform the related activities and tasks under the AI Act for radio equipment in future. In addition, market surveillance is being prepared for the areas covered by Annex III of the AI Act.

Establishment of a Coordination and Competence Centre: In Germany, a central Coordination and Competence Centre is to be set up at the Bundesnetzagentur to support national market surveillance and notifying authorities in the complex application of the AI Act.

Pilot project to simulate an AI regulatory sandbox: In May 2025, Bundesnetzagentur launched a pilot project to simulate an AI regulatory sandbox in collaboration with the Hessian Ministry of Digital Affairs and Innovation and the Federal Commissioner for Data Protection and Freedom of Information.

Why is market surveillance important?

The aim of market surveillance is to ensure that products on the market do not pose unacceptable risks to the health and safety of consumers or to the environment.

It helps to create a level playing field for all economic operators and prevents companies from gaining unfair advantages over their competitors by disregarding regulations.