Transparency obligations

Providers and deployers of certain AI systems must comply with specific transparency obligations. While some of these obligations must be taken into account during the development of the AI system itself, most aim to ensure that the deployment of an AI system is disclosed and that interactions with persons are transparent.

The following overview summarises the key rules, which are set out in detail in Article 50 “Transparency obligations for providers and deployers of certain AI systems”.

Generally speaking, different transparency obligations apply depending on the purpose of an AI system. There are also exceptions where compliance or full compliance with the obligations is not required.

Providers

AI systems intended to interact directly with natural persons

Transparency obligation(s):
- Persons concerned must be informed that they are interacting with an AI system

Exceptions:
- It is obvious from the context that the persons are interacting with an AI system
- The AI system is authorised by law to detect, prevent, investigate or prosecute criminal offences and there are appropriate safeguards for the rights and freedoms of third parties

General-purpose AI systems (GPAI) generating synthetic content (audio, image, video or text)

Transparency obligation(s):
- Outputs of the AI system must be marked in a machine-readable format (illustrated in the sketch below)
- Outputs of the AI system must be detectable as artificially generated or manipulated
- The technical solution must be effective, interoperable, robust and reliable

Exceptions:
- The AI system performs an assistive function for standard editing
- The AI system does not substantially alter the input data or the semantics
- The AI system is authorised by law to detect, prevent, investigate or prosecute criminal offences and there are appropriate safeguards for the rights and freedoms of third parties
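Article 50 does not prescribe a particular marking technique for the provider-side marking obligation above; approaches discussed in practice range from watermarking to provenance metadata such as C2PA content credentials. Purely as a minimal sketch of the “machine-readable” idea, assuming a Python/Pillow image pipeline and hypothetical metadata keys (ai_generated, generator), an embedded provenance flag could look like this:

```python
# Illustrative sketch only: embeds a provenance note in a PNG's metadata so that
# downstream tools can read it programmatically. The field names ("ai_generated",
# "generator") are hypothetical, not a format mandated by the AI Act.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def mark_generated_image(image: Image.Image, generator_name: str, out_path: str) -> None:
    """Save a generated image with machine-readable text chunks flagging it as synthetic."""
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")        # hypothetical key
    metadata.add_text("generator", generator_name)   # hypothetical key
    image.save(out_path, format="PNG", pnginfo=metadata)

# Usage sketch: mark a dummy "generated" image and read the flag back.
if __name__ == "__main__":
    synthetic = Image.new("RGB", (256, 256), color="grey")  # stand-in for model output
    mark_generated_image(synthetic, "example-image-model", "output.png")
    print(Image.open("output.png").text)  # {'ai_generated': 'true', 'generator': 'example-image-model'}
```

Plain metadata of this kind is trivially stripped, so on its own it would not show that the technical solution is effective, robust and reliable; robust watermarking or cryptographically signed content credentials are the kinds of techniques typically discussed for that part of the obligation.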
Deployers

Emotion recognition system or biometric categorisation system

Transparency obligation(s):
- Persons concerned must be informed of the operation of the AI system
- The personal data must be processed in accordance with Regulations (EU) 2016/679 and (EU) 2018/1725

Exceptions:
- The AI system is permitted by law to detect, prevent, investigate or prosecute criminal offences and there are appropriate safeguards for the rights and freedoms of third parties

AI systems generating or manipulating image, audio or video content constituting a deep fake

Transparency obligation(s):
- Deployers must disclose that the content has been artificially generated or manipulated (illustrated in the sketch below)

Exceptions:
- The AI system is permitted by law to detect, prevent, investigate or prosecute criminal offences

AI systems generating image, audio or video content forming part of an evidently artistic, creative, satirical, fictional or analogous work

Transparency obligation(s):
- Deployers must appropriately disclose that the content has been generated or manipulated

AI systems generating or manipulating text with the purpose of informing the public on matters of public interest

Transparency obligation(s):
- Deployers must disclose that the content has been artificially generated or manipulated

Exceptions:
- The AI system is authorised by law to detect, prevent, investigate or prosecute criminal offences
- The AI-generated content has undergone a process of human review or editorial control and a natural or legal person holds editorial responsibility
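The deployer-side disclosure obligations above concern how published content is presented rather than how it is generated, and the overview does not fix any particular form of disclosure. As a minimal, purely illustrative sketch, assuming a Pillow-based publishing step and hypothetical label wording, a visible disclosure banner could be stamped onto deep-fake image content before publication; a disclosure for AI-generated text informing the public could analogously be prepended as a notice.

```python
# Illustrative sketch only: stamps a visible disclosure notice onto image content
# before publication. The wording and placement of the label are assumptions; the
# AI Act does not prescribe a particular presentation for the disclosure.
from PIL import Image, ImageDraw

DISCLOSURE = "This image has been artificially generated or manipulated."  # hypothetical wording

def add_disclosure_label(in_path: str, out_path: str) -> None:
    """Draw a disclosure banner along the bottom edge of the image."""
    img = Image.open(in_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    banner_height = 24
    # Black strip across the bottom, then the disclosure text in white on top of it.
    draw.rectangle([(0, img.height - banner_height), (img.width, img.height)], fill="black")
    draw.text((8, img.height - banner_height + 6), DISCLOSURE, fill="white")
    img.save(out_path)
```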
AI systems classified as high-risk are subject to the requirements and obligations for high-risk AI systems in addition to these transparency obligations. These transparency obligations also supplement any other transparency obligations laid down in Union or national law.