Transparency obligations
Providers and deployers of certain AI systems must comply with specific transparency obligations. Some of these obligations have to be taken into account during the development of the AI system itself, but most of them aim to ensure that the deployment of an AI system is disclosed and that interactions with natural persons are transparent.
The following table provides an overview of the key rules, which are set out in detail in Article 50 “Transparency obligations for providers and deployers of certain AI systems”.
Generally speaking, different transparency obligations apply depending on the purpose of an AI system. There are also exceptions under which the obligations do not apply at all or apply only in part.
| Type of system | Transparency obligation(s) | Exceptions |
|---|---|---|
| AI systems intended to interact directly with natural persons | - Persons concerned must be informed that they are interacting with an AI system | - It is obvious from the context that the persons are interacting with an AI system<br>- The AI system is authorised by law to detect, prevent, investigate or prosecute criminal offences and there are appropriate safeguards for the rights and freedoms of third parties |
| General-purpose AI systems (GPAI) generating synthetic content (audio, image, video or text) | - Outputs of the AI system must be marked in a machine-readable format (see the illustrative sketch after this table)<br>- Outputs of the AI system must be detectable as artificially generated or manipulated<br>- The technical solution must be effective, interoperable, robust and reliable | - The AI system performs an assistive function for standard editing<br>- The AI system does not substantially alter the input data or the semantics<br>- The AI system is authorised by law to detect, prevent, investigate or prosecute criminal offences and there are appropriate safeguards for the rights and freedoms of third parties |
| Emotion recognition system or biometric categorisation system | - Persons concerned must be informed of the operation of the AI system<br>- Personal data must be processed in accordance with the relevant Regulations: (EU) 2016/679 and (EU) 2018/1725 | - The AI system is permitted by law to detect, prevent, investigate or prosecute criminal offences and there are appropriate safeguards for the rights and freedoms of third parties |
| AI systems generating or manipulating image, audio or video content constituting a deep fake | - Deployers must disclose that the content has been artificially generated or manipulated | - The AI system is permitted by law to detect, prevent, investigate or prosecute criminal offences |
| AI systems generating image, audio or video content forming part of an evidently artistic, creative, satirical, fictional or analogous work | - Deployers must appropriately disclose that the content has been generated or manipulated | |
| AI systems generating or manipulating text with the purpose of informing the public on matters of public interest | - Deployers must disclose that the content has been artificially generated or manipulated | - The AI system is authorised by law to detect, prevent, investigate or prosecute criminal offences<br>- The AI-generated content has undergone a process of human review or editorial control and a natural or legal person holds editorial responsibility |
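For the machine-readable marking obligation, Article 50 does not prescribe a specific technical mechanism; watermarking, metadata tags or provenance standards such as C2PA/Content Credentials are possible approaches. The following is a minimal illustrative sketch, not a prescribed or endorsed implementation: it shows how a provider might embed a simple "AI-generated" marker in the metadata of a generated PNG image using the Pillow library. The metadata keys, values and function names are assumptions chosen purely for illustration.

```python
# Illustrative sketch only: embedding a machine-readable "AI-generated"
# marker in PNG metadata with Pillow. The keys "ai_generated" and
# "generator" are hypothetical; real deployments would follow an
# interoperable standard (e.g. C2PA/Content Credentials).
from PIL import Image, PngImagePlugin


def save_with_ai_marker(image: Image.Image, path: str, tool_name: str) -> None:
    """Save a generated image together with machine-readable provenance metadata."""
    meta = PngImagePlugin.PngInfo()
    meta.add_text("ai_generated", "true")   # hypothetical key
    meta.add_text("generator", tool_name)   # hypothetical key
    image.save(path, format="PNG", pnginfo=meta)


def is_marked_ai_generated(path: str) -> bool:
    """Check whether an image carries the illustrative marker above."""
    with Image.open(path) as img:
        return img.info.get("ai_generated") == "true"


if __name__ == "__main__":
    # Stand-in for a model output: a plain 64x64 image.
    generated = Image.new("RGB", (64, 64), color="grey")
    save_with_ai_marker(generated, "output.png", "example-image-model")
    print(is_marked_ai_generated("output.png"))  # True
```

Note that plain metadata of this kind can be stripped by simply re-encoding the file, which is one reason the Act requires the technical solution to be effective, interoperable, robust and reliable; more durable approaches typically combine provenance metadata with watermarking.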