Prohibited practices
Article 5 of the AI Act prohibits the placing on the market, putting into service or use of certain AI systems for manipulative, exploitative and social control or surveillance practices that inherently violate fundamental rights and Union values. These prohibitions have been in place since 2 February 2025.
- Legal basis for the prohibitions
- Material scope
- Interplay between the prohibitions and high-risk AI systems
- Overview
- Harmful manipulation, deception and exploitation of vulnerable persons
- Social scoring
- Individual risk assessment and prediction of criminal offences
- Untargeted scraping of facial images
- Inference of emotions at the workplace and in education institutions
- Biometric categorisation for certain “sensitive” characteristics
- Real-time remote biometric identification in publicly accessible spaces for the purposes of law enforcement
- Links for further information
The application of Article 5 of the AI Act requires a case-by-case assessment taking due account of the specific situation in each individual case.
On 4 February 2025, the European Commission published its Guidelines on prohibited artificial intelligence practices established by Regulation (EU) 2024/1689 (AI Act). These guidelines have been available in all official languages of the European Union since 29 July 2025.
While the guidelines are non-binding, with authoritative interpretations reserved for the Court of Justice of the European Union (CJEU), they provide important guidance, legal explanations and practical examples and are designed to promote the uniform interpretation and application of the prohibitions.
Legal basis for the prohibitions
The AI Act is supported by two legal bases: Article 114 of the Treaty on the Functioning of the European Union (TFEU) (the internal market legal basis) and Article 16 of the TFEU (the data protection legal basis). Article 16 of the TFEU is the legal basis for the prohibitions on the use of remote biometric identification (RBI) systems, biometric categorisation systems and individual risk assessments for law enforcement purposes. All other prohibitions listed in Article 5 of the AI Act have their legal basis in Article 114 of the TFEU.
Material scope
The AI Act prohibits certain practices related to the placing on the market, putting into service or use of an AI system. As regards real-time remote biometric identification systems, the prohibition in Article 5(1)(h) only applies to their use.
According to Article 3(9) AI Act, the placing on the market of an AI system means “the first making available of an AI system [...] on the Union market”. According to Article 3(10), “making available on the market” means the supply of the system “for distribution or use on the Union market in the course of a commercial activity, whether in return for payment or free of charge”.
While the “use” of an AI system is not explicitly defined in the AI Act, according to the Guidelines it should be understood to cover the use or deployment of the system at any moment of its lifecycle after having been placed on the market or put into service.
Interplay between the prohibitions and high-risk AI systems
According to the Guidelines, the AI practices prohibited by Article 5 of the AI Act should be considered in relation to the AI systems classified as high-risk in accordance with Article 6, in particular those listed in Annex III. This is because the use of AI systems classified as high-risk may qualify as a prohibited practice in specific instances if all the conditions under one or more of the prohibitions in Article 5 are fulfilled. Conversely, most AI systems that fall under an exception from a prohibition listed in Article 5 will qualify as high-risk.
The conditions for the prohibitions are cumulative, i.e. only if all of them are met is the AI system prohibited.
AI systems that are exceptionally not considered high-risk based on Article 6(3) AI Act, despite falling under a high-risk use case of Annex III, may still fall within the scope of the prohibitions of Article 5.
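To make the cumulative reading concrete, the following minimal sketch models a purely hypothetical screening checklist for the manipulation prohibition in Article 5(1)(a). The class name, field names and helper function are assumptions made for this illustration only; they are paraphrases, not legal tests, and are no substitute for the case-by-case assessment the AI Act requires.

```python
# Illustrative sketch only: a hypothetical checklist mirroring the cumulative
# conditions of Article 5(1)(a) AI Act. All names are assumptions for this
# example; a real assessment must be made case by case by qualified reviewers.
from dataclasses import dataclass


@dataclass
class Article5aScreen:
    placed_on_market_put_into_service_or_used: bool
    uses_subliminal_manipulative_or_deceptive_techniques: bool
    materially_distorts_behaviour: bool
    causes_or_is_likely_to_cause_significant_harm: bool

    def all_conditions_met(self) -> bool:
        # The conditions are cumulative: the practice is prohibited only if
        # every single condition is fulfilled.
        return all(
            (
                self.placed_on_market_put_into_service_or_used,
                self.uses_subliminal_manipulative_or_deceptive_techniques,
                self.materially_distorts_behaviour,
                self.causes_or_is_likely_to_cause_significant_harm,
            )
        )


# Example: three of the four conditions are met, so on this simplified reading
# the cumulative test fails and the practice does not fall under the prohibition.
screen = Article5aScreen(True, True, True, False)
print(screen.all_conditions_met())  # False
```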
Overview
| Article | Prohibited practice |
|---|---|
| 5(1)(a) | Harmful manipulation and deception |
| 5(1)(b) | Harmful exploitation of vulnerabilities |
| 5(1)(c) | Social scoring |
| 5(1)(d) | Individual risk assessment and prediction of criminal offences |
| 5(1)(e) | Untargeted scraping of facial images |
| 5(1)(f) | Emotion recognition at the workplace and in education institutions |
| 5(1)(g) | Biometric categorisation for certain “sensitive” characteristics |
| 5(1)(h) | Real-time remote biometric identification in publicly accessible spaces for the purposes of law enforcement |
Harmful manipulation, deception and exploitation of vulnerable persons
According to the Guidelines, these first two prohibitions aim to safeguard individuals and vulnerable persons from the significantly harmful effects of AI-enabled manipulation and exploitation.
These prohibitions aim to protect the right to human dignity (Article 1 of the Charter of Fundamental Rights of the European Union), which constitutes the basis of all fundamental rights and includes individual autonomy as an essential aspect. In particular, the prohibitions aim to prevent manipulation and exploitation through AI systems that reduce individuals to mere tools for achieving certain ends and to safeguard those that are most vulnerable and susceptible to harmful manipulation and exploitation.
Main components of the prohibition in Article 5(1)(a) of the AI Act – harmful manipulation
Several cumulative conditions must be fulfilled for the prohibition:
- the placing on the market, putting into service or use of an AI system
- that deploys subliminal, purposefully manipulative or deceptive techniques
- with the objective or the effect of materially distorting the behaviour of persons
- and causes or is reasonably likely to cause the persons significant harm.
Main components of the prohibition in Article 5(1)(b) of the AI Act – harmful exploitation of vulnerabilities
Several cumulative conditions must be fulfilled for the prohibition:
- the placing on the market, putting into service or use of an AI system
- that exploits vulnerabilities of a person or a specific group of persons due to their age, disability or a specific social or economic situation
- with the objective or the effect of materially distorting the behaviour of that person or a person belonging to that group
- and causes or is reasonably likely to cause that person or another person significant harm.
The interplay between the two prohibitions requires a clear distinction between the specific contexts that each prohibition covers. This ensures that they are complementary to and not in contradiction with each other. The primary focus of the first prohibition is on techniques that operate below the threshold of conscious awareness or are purposefully manipulative or deceptive. By contrast, the primary focus of the second prohibition is the protection of particularly vulnerable persons such as children, persons with disabilities or persons in precarious social or economic situations. These persons are more susceptible to AI exploitation due to their circumstances and therefore require additional protection.
The Guidelines include several examples of use cases outside the scope of the prohibitions in addition to interpretations of further terms and details of the different techniques and different types of harm and their likelihood. They also address the interplay between the prohibitions and other EU law.
Social scoring
While AI-enabled scoring can bring benefits by promoting good behaviour and improving the safety, efficiency or quality of services, there are certain “social scoring” practices that treat or harm people unfairly and amount to social control and surveillance. The prohibition in Article 5(1)(c) of the AI Act targets such unacceptable AI-enabled social scoring practices. These assess or classify individuals or groups based on their social behaviour or personal characteristics and lead to detrimental or unfavourable treatment. This is particularly problematic if the data comes from multiple unrelated social contexts or the treatment is disproportionate to the gravity of the social behaviour. The “social scoring” prohibition applies to both public and private contexts and is not limited to specific sectors or fields.
The prohibition of “social scoring” aims to protect, in particular, the right to human dignity and other fundamental rights. These include the right to non-discrimination and equality, to data protection and to private and family life, as well as social and economic rights. It also aims to safeguard and promote the EU’s values of democracy, equality (including equal access to public and private services) and justice.
Main components of the prohibition in Article 5(1)(c) of the AI Act – social scoring
Several cumulative conditions must be fulfilled for the prohibition:
- the placing on the market, putting into service or use of an AI system
- for the evaluation or classification of persons over a certain period of time based on
  - their social behaviour or
  - known, inferred or predicted personal or personality characteristics
- with the social score leading to either or both of the following:
  - detrimental or unfavourable treatment of persons in social contexts that are unrelated to the contexts in which the data was collected;
  - detrimental or unfavourable treatment of persons that is unjustified or disproportionate to their social behaviour or its gravity.
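The conditions listed above combine the overall cumulative test with two “either/or” elements: the basis for the evaluation and the resulting treatment. As a purely illustrative counterpart to the earlier sketch, the fragment below models that structure; the parameter names are assumptions chosen for this example, not legal tests, and do not replace a case-by-case assessment.

```python
# Illustrative sketch only: the social scoring prohibition in Article 5(1)(c)
# combines cumulative conditions with two "either/or" elements. Names are
# assumptions for this example; the legal assessment is always case by case.
def article_5_1_c_screen(
    placed_on_market_put_into_service_or_used: bool,
    evaluates_based_on_social_behaviour: bool,
    evaluates_based_on_personal_characteristics: bool,
    treatment_in_unrelated_context: bool,
    treatment_unjustified_or_disproportionate: bool,
) -> bool:
    # At least one of the two evaluation bases must be present.
    evaluation_basis = (
        evaluates_based_on_social_behaviour
        or evaluates_based_on_personal_characteristics
    )
    # The social score must lead to at least one of the two forms of
    # detrimental or unfavourable treatment.
    harmful_treatment = (
        treatment_in_unrelated_context
        or treatment_unjustified_or_disproportionate
    )
    # Prohibited only if all cumulative elements are fulfilled.
    return (
        placed_on_market_put_into_service_or_used
        and evaluation_basis
        and harmful_treatment
    )


# Example: scoring based on social behaviour that leads to disproportionate
# treatment meets the cumulative test on this simplified reading.
print(article_5_1_c_screen(True, True, False, False, True))  # True
```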
The Guidelines include several examples of use cases outside the scope of the prohibition in addition to further details of the individual components. They also address the interplay between the prohibition and other EU law.
Individual risk assessment and prediction of criminal offences
Article 5(1)(d) of the AI Act prohibits AI systems that assess or predict the risk of a person committing a criminal offence based solely on profiling or assessing personality traits and characteristics.
The prohibition does not apply if the AI system is used to support a human assessment that is based on objective and verifiable facts directly linked to a person's criminal activity. Such AI systems used by law enforcement authorities, or on their behalf, for assessing the risk of a person offending or re-offending are classified as high-risk AI systems (Annex III, point 6(d) AI Act) and must comply with all the requirements under the AI Act.
Main components of the prohibition in Article 5(1)(d) of the AI Act – individual risk assessment and prediction of criminal offences
Several cumulative conditions must be fulfilled for the prohibition:
- the placing on the market, putting into service or use of an AI system
- in order to assess or predict the risk of a natural person committing a criminal offence
- based solely on the profiling of a natural person or on assessing their personality traits and characteristics.
A key criterion for the exclusion from the prohibition is that the AI system supports the human assessment and does not make the risk assessment itself, as is the case in the prohibited situations. The exclusion only applies if the human assessment is based on objective and verifiable facts directly linked to a criminal activity.
In addition to law enforcement authorities, private entities may also be covered by the prohibition in the AI Act, in particular when they are entrusted by law to exercise public authority and public powers for the prevention or prosecution of criminal offences or act on behalf of law enforcement authorities. The prohibition also applies to private entities assessing or predicting the risk of a person committing a crime if this is necessary for compliance with a legal obligation, as in the case of anti-money laundering or terrorism financing.
The prohibition does not apply to:
- location-based or geospatial/place-based crime predictions,
- AI systems that support human assessments based on objective and verifiable facts linked to a criminal activity,
- AI systems used for crime predictions and assessments in relation to legal entities,
- AI systems used for individual predictions of administrative offences.
The Guidelines include several examples of use cases in addition to further details of the individual components. They also address the interplay between the prohibition and other EU law.
Untargeted scraping of facial images
The untargeted scraping of facial images from the internet and from CCTV footage interferes with individuals’ rights to privacy and data protection and denies those individuals the right to remain anonymous. Recital 43 of the AI Act therefore justifies the prohibition established in Article 5(1)(e) based on the “feeling of mass surveillance” and the risks of “gross violations of fundamental rights, including the right to privacy”.
Main components of the prohibition in Article 5(1)(e) of the AI Act – untargeted scraping of facial images
Several cumulative conditions must be fulfilled for the prohibition:
- the placing on the market, putting into service or use of an AI system
- for the purpose of creating or expanding a facial recognition database
- by populating the database through AI tools for untargeted scraping
- with images from the internet or CCTV footage.
The prohibition covers AI systems used to create or expand facial recognition databases. Facial recognition compares a human face from a digital image or video frame with the images stored in such a database to find likely matches. The database can be temporary, centralised or decentralised. The prohibition applies if a database can be used for facial recognition, even if facial recognition is not its sole purpose.
“Scraping” refers to using web crawlers, bots or other automatic means to extract data or content from different sources, including CCTV, websites or social media. These tools sift through databases, extract information and make use of the information for other purposes.
“Untargeted” refers to data being indiscriminately absorbed as with a vacuum cleaner without targeting specific persons or groups of persons. Compliance with opt-out arrangements such as robots.txt does not affect the untargeted nature of scraping.
The prohibition applies if the source of the facial images is the internet or CCTV footage. If a person publishes facial images of themselves on a social media platform, this does not mean that they have given their consent for those images to be included in a facial recognition database. Examples of CCTV footage include images acquired by cameras in places such as airports, streets or parks.
The Guidelines include several examples of use cases in addition to further details of the individual components. They also address the interplay between the prohibition and other EU law.
Inference of emotions at the workplace and in education institutions
Emotion recognition technology is evolving quickly and comprises different methods to detect, analyse and react to the emotions of persons. The efficiency and accuracy of emotion recognition are frequently doubted. Recital 44 of the AI Act explains that there are “serious concerns” as expressions of emotions can vary considerably across cultures and situations and even within a single individual. Such systems often lack reliability and specificity and have limited generalisability.
Emotion recognition can lead to discriminatory outcomes and can be intrusive to the rights and freedoms of the persons concerned, in particular the rights to privacy, human dignity and freedom of thought. This is especially problematic in asymmetric relationships such as at the workplace or in education institutions, where both workers and students are in vulnerable positions. At the same time, emotion recognition can have benefits in specific use contexts, such as for safety and medical care.
Main components of the prohibition in Article 5(1)(f) of the AI Act – inference of emotions at the workplace and in education institutions
Several cumulative conditions must be fulfilled for the prohibition:
- the placing on the market, putting into service or use of an AI system
- to infer emotions
- at the workplace or in an education institution.
The prohibition does not refer to “emotion recognition systems” as defined in Article 3(39) AI Act but to “AI systems to infer emotions of a natural person” as described in recital 44 AI Act. Nevertheless, it is important to construe the prohibition as having a similar scope as the rules applicable to other emotion recognition systems and to limit it to inferences based on a natural person’s biometric data.
Inferring emotions or intentions generally encompasses identifying them as a prerequisite. The prohibition therefore includes both AI systems identifying and AI systems inferring emotions or intentions. Identification means that the biometric data of a person (for example their voice or facial expression) is directly compared with an emotion that has been programmed in the emotion recognition system. By contrast, inferring in this context means that new information is generated by the system itself. This information is based not only on the data collected on the person, but also on other data sources and learning processes.
While recital 18 of the AI Act lists examples of emotions such as “happiness, sadness, anger, surprise”, the concept should be understood in a wide sense and not interpreted restrictively. Emotion recognition does not include the detection of physical states such as fatigue or illness.
The prohibition is limited to the areas of workplace and education institutions and is intended to address the imbalance of power in these contexts.
The prohibition contains an explicit exception for emotion recognition systems used for medical or safety reasons, such as systems for therapeutic use.
Biometric categorisation for certain “sensitive” characteristics
A wide variety of information, including sensitive information, can be extracted or inferred from biometric data, frequently without the knowledge of the persons concerned. This may lead to unfair and discriminatory treatment, for example when a service is denied because a person is considered to be of a certain ethnicity. AI-based biometric systems that categorise persons based on their sexual or political orientation, ethnicity or other characteristics violate human dignity and pose risks to fundamental rights such as privacy and non-discrimination. Such systems are therefore prohibited by Article 5(1)(g) of the AI Act.
Main components of the prohibition in Article 5(1)(g) of the AI Act – biometric categorisation for certain “sensitive” characteristics
Several cumulative conditions must be fulfilled for the prohibition:
- the placing on the market, putting into service or use of an AI system
- that is a biometric categorisation system
- categorising individual persons
- based on their biometric data
- to deduce or infer their ethnicity, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation.
The prohibition does not cover the labelling or filtering of lawfully acquired biometric data sets, including in the area of law enforcement.
The categorisation of an individual by a biometric system is the process of establishing whether the biometric data of an individual belongs to a group with some pre-defined characteristic. It is not about verifying an individual’s identity but about assigning an individual to a certain category. For example, an advertising display may show different adverts depending on the age or gender of the individual looking at it.
Two conditions must be cumulatively fulfilled for an AI system to fall outside the scope of the definition of biometric categorisation in Article 3(40) AI Act: it must be “ancillary to another commercial service” and “strictly necessary for objective technical reasons”. The Guidelines give the example of filters categorising facial or bodily features used on online marketplaces to allow consumers to preview a product on themselves. These filters can only be used in relation to the principal service, which consists in selling a product.
The prohibition only applies if natural persons are individually categorised. If the purpose or outcome of the biometric categorisation is not to categorise individual persons, the prohibition does not apply. This is the case, for example, if a whole group is categorised without looking at the individual.
The Guidelines include several examples of use cases in addition to further details of the individual components. They also address the interplay between the prohibition and other EU law.
Real-time remote biometric identification in publicly accessible spaces for the purposes of law enforcement
Recital 32 of the AI Act acknowledges that real-time remote biometric identification systems in publicly accessible spaces are intrusive to the rights and freedoms of the persons concerned. They affect the private life of a large part of the population, evoke a feeling of constant surveillance and may restrict the exercise of fundamental rights such as the freedom of assembly. Technical inaccuracies of these systems can lead to biased results and entail discriminatory effects, in particular with regard to age, ethnicity, gender or disabilities. The immediate impact and the limited opportunities for checks or corrections heighten the risks for the rights and freedoms of the persons concerned in the context of law enforcement activities.
In clearly and narrowly defined situations, where the use of such systems is strictly necessary to achieve a substantial public interest, the importance of that interest may outweigh the risks to fundamental rights (recital 33 AI Act). To ensure that such systems are used in a responsible and proportionate manner, they are subject to the safeguards and the specific requirements in Article 5(2) to (7) of the AI Act.
Main components of the prohibition in Article 5(1)(h) of the AI Act – real-time remote biometric identification in publicly accessible spaces for the purposes of law enforcement
Several cumulative conditions must be fulfilled for the prohibition:
- the AI system is a remote biometric identification system and
- the activity consists of the use of that AI system
- in real time,
- in publicly accessible spaces and
- for the purposes of law enforcement.
The AI Act provides for three exceptions to the general prohibition on the use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes:
- the targeted search for specific victims of abduction, trafficking in human beings or sexual exploitation of human beings, as well as the search for missing persons;
- the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons, or of a genuine and present or genuine and foreseeable threat of a terrorist attack;
- the localisation or identification of a person suspected of having committed one of the criminal offences referred to in Annex II of the AI Act.
The conditions and safeguards for the authorisation of such systems are laid down in Article 5(2) to (7). Article 5(1)(h)(i) to (iii) does not in itself constitute a legal basis for the use of real-time remote biometric identification systems. Rather, only a domestic law that fulfils the requirements in Article 5(2) to (7) can allow the use of real-time remote biometric identification systems. In the absence of such legislation, the use of real-time remote biometric identification for the objectives specified is prohibited as from 2 February 2025.
Links for further information
FAQ
- FAQs: all you need to know about AI
- Artificial Intelligence: Questions and Answers (European Commission)
Contact details
- Use our online form to contact us if you haven't found the answer to your question (in German)
Events
- AI-Café (in German)
Links and Downloads
- Bundesnetzagentur's AI compliance compass (in German)
- Hinweispapier: KI-Kompetenzen nach Artikel 4 KI-Verordnung (guidance paper on AI literacy under Article 4 of the AI Act) (pdf / 357 KB) (in German)
- EU guidelines on the definition of an artificial intelligence system
- General-Purpose AI Code of Practice
- Digital transformation among SMEs (in German)