Definitions
This page provides details of selected terms relating to the AI Act.
- AI system
- Operators within the meaning of the AI Act
- Terms relating to authorities and institutions
- Terms relating to AI systems and their deployment
- Terms relating to notification
- Terms relating to biometrics
AI system
| Article | Term | Definition |
|---|---|---|
| 3(1) | AI system | a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments |
The European Commission published its Guidelines on the definition of an artificial intelligence system on 6 February 2025. The Guidelines are not legally binding and will be updated as and when necessary.
According to the Guidelines, the definition comprises the following seven main elements:
- machine-based
- varying levels of autonomy
- adaptiveness
- explicit and implicit objectives
- inference
- outputs
- interaction
Operators within the meaning of the AI Act
| Article | Term | Definition |
|---|---|---|
| 3(3) | provider | a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge |
| 3(4) | deployer | a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity |
| 3(5) | authorised representative | a natural or legal person located or established in the Union who has received and accepted a written mandate from a provider of an AI system or a general-purpose AI model to, respectively, perform and carry out on its behalf the obligations and procedures established by this Regulation |
| 3(6) | importer | a natural or legal person located or established in the Union that places on the market an AI system that bears the name or trademark of a natural or legal person established in a third country |
| 3(7) | distributor | a natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market |
| 3(8) | operator | a provider, product manufacturer, deployer, authorised representative, importer or distributor |
| 3(68) | downstream provider | a provider of an AI system, including a general-purpose AI system, which integrates an AI model, regardless of whether the AI model is provided by themselves and vertically integrated or provided by another entity based on contractual relations |
Terms relating to authorities and institutions
| Article | Term | Definition |
|---|---|---|
| 3(19) | notifying authority | the national authority responsible for setting up and carrying out the necessary procedures for the assessment, designation and notification of conformity assessment bodies and for their monitoring |
| 3(21) | conformity assessment body | a body that performs third-party conformity assessment activities, including testing, certification and inspection |
| 3(22) | notified body | a conformity assessment body notified in accordance with this Regulation and other relevant Union harmonisation legislation |
| 3(26) | market surveillance authority | the national authority carrying out the activities and taking the measures pursuant to Regulation (EU) 2019/1020 |
| 3(45) | law enforcement authority | (a) any public authority competent for the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security; or (b) any other body or entity entrusted by Member State law to exercise public authority and public powers for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security |
| 3(47) | AI Office | the Commission’s function of contributing to the implementation, monitoring and supervision of AI systems and general-purpose AI models, and AI governance, provided for in Commission Decision of 24 January 2024; references in this Regulation to the AI Office shall be construed as references to the Commission |
| 3(48) | national competent authority | a notifying authority or a market surveillance authority; as regards AI systems put into service or used by Union institutions, agencies, offices and bodies, references to national competent authorities or market surveillance authorities in this Regulation shall be construed as references to the European Data Protection Supervisor |
| 3(55) | AI regulatory sandbox | a controlled framework set up by a competent authority which offers providers or prospective providers of AI systems the possibility to develop, train, validate and test, where appropriate in real-world conditions, an innovative AI system, pursuant to a sandbox plan for a limited time under regulatory supervision |
| Reference | Term | Definition |
|---|---|---|
| Recital 20, Article 65, Article 66 | Artificial Intelligence Board (“Board”) | The European Artificial Intelligence Board should support the Commission in promoting AI literacy tools and public awareness and understanding of the benefits, risks, safeguards, rights and obligations in relation to the use of AI systems. The Board is composed of one representative per Member State. The European Data Protection Supervisor participates as observer. The AI Office also attends the Board’s meetings, without taking part in the votes. Other national and Union authorities, bodies or experts may be invited to the meetings by the Board on a case-by-case basis, where the issues discussed are of relevance for them. |
| Article 67(1) and (2) | advisory forum | An advisory forum must be established to provide technical expertise and advise the Board and the Commission, and to contribute to their tasks under this Regulation. The membership of the advisory forum must represent a balanced selection of stakeholders, including industry, start-ups, SMEs, civil society and academia. The membership of the advisory forum must be balanced with regard to commercial and non-commercial interests and, within the category of commercial interests, with regard to SMEs and other undertakings. |
| Article 68 | scientific panel of independent experts | The scientific panel consists of experts selected by the Commission on the basis of up-to-date scientific or technical expertise in the field of AI. The scientific panel advises and supports the AI Office. |
| Article 70(2) | single point of contact | Member States must designate a market surveillance authority to act as a single point of contact for the Regulation and must notify the Commission of the identity of the single point of contact. The Commission must make a list of the single points of contact publicly available. |
Terms relating to AI systems and their deployment
Detailed information about the category of high-risk AI systems and about prohibited AI practices is provided on separate pages.
| Article | Term | Definition |
|---|---|---|
| 3(2) | risk | the combination of the probability of an occurrence of harm and the severity of that harm |
| 3(9) | placing on the market | the first making available of an AI system or a general-purpose AI model on the Union market |
| 3(10) | making available on the market | the supply of an AI system or a general-purpose AI model for distribution or use on the Union market in the course of a commercial activity, whether in return for payment or free of charge |
| 3(11) | putting into service | the supply of an AI system for first use directly to the deployer or for own use in the Union for its intended purpose |
| 3(12) | intended purpose | the use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation |
| 3(13) | reasonably foreseeable misuse | the use of an AI system in a way that is not in accordance with its intended purpose, but which may result from reasonably foreseeable human behaviour or interaction with other systems, including other AI systems |
| 3(14) | safety component | a component of a product or of an AI system which fulfils a safety function for that product or AI system, or the failure or malfunctioning of which endangers the health and safety of persons or property. Recital 55 also describes safety components in connection with the management and operation of critical infrastructure as systems protecting “the physical integrity of critical infrastructure or the health and safety of persons and property”. By contrast, components intended to be used solely for cybersecurity purposes should not qualify as safety components. |
| 3(15) | instructions for use | the information provided by the provider to inform the deployer of, in particular, an AI system’s intended purpose and proper use |
| 3(18) | performance of an AI system | the ability of an AI system to achieve its intended purpose |
| 3(23) | substantial modification | a change to an AI system after its placing on the market or putting into service which is not foreseen or planned in the initial conformity assessment carried out by the provider and as a result of which the compliance of the AI system with the requirements set out in Chapter III, Section 2 is affected or results in a modification to the intended purpose for which the AI system has been assessed |
| 3(29) | training data | data used for training an AI system through fitting its learnable parameters |
| 3(30) | validation data | data used for providing an evaluation of the trained AI system and for tuning its non-learnable parameters and its learning process in order, inter alia, to prevent underfitting or overfitting |
| 3(31) | validation data set | a separate data set or part of the training data set, either as a fixed or variable split |
| 3(32) | testing data | data used for providing an independent evaluation of the AI system in order to confirm the expected performance of that system before its placing on the market or putting into service |
| 3(33) | input data | data provided to or directly acquired by an AI system on the basis of which the system produces an output |
| 3(60) | deep fake | AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful |
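The data-related definitions in Article 3(29) to (33) correspond to the conventional training/validation/test split used in machine learning. The following is a minimal illustrative sketch of a fixed split in the sense of Article 3(31); it is not part of the AI Act, and the split fractions are arbitrary example values:

```python
def fixed_split(records, train_frac=0.7, val_frac=0.15):
    """Split one data set into training, validation and testing portions.

    A 'fixed split' in the sense of Article 3(31): the validation data set
    is carved out of the same data set at fixed proportions.
    """
    n = len(records)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    training_data = records[:n_train]                    # fits learnable parameters, Art. 3(29)
    validation_data = records[n_train:n_train + n_val]   # tunes non-learnable parameters, Art. 3(30)
    testing_data = records[n_train + n_val:]             # independent final evaluation, Art. 3(32)
    return training_data, validation_data, testing_data

# Example: 100 records split 70 / 15 / 15
train, val, test = fixed_split(list(range(100)))
print(len(train), len(val), len(test))  # 70 15 15
```

Note that Article 3(31) also allows a "variable split", i.e. a validation portion that changes between training runs, as in cross-validation.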
Terms relating to notification
Notification under the AI Act means the official communication to the European Commission and the other EU Member States that a conformity assessment body has been verified and designated. Third-party conformity assessment under the AI Act may only be carried out by these notified bodies. Where a high-risk AI system is subject to a conformity assessment procedure involving a notified body (Article 43 of the AI Act), the notified body assesses whether the high-risk AI system conforms to the requirements of the AI Act.
| Article | Term | Definition |
|---|---|---|
| 3(19) | notifying authority | the national authority responsible for setting up and carrying out the necessary procedures for the assessment, designation and notification of conformity assessment bodies and for their monitoring |
| 3(20) | conformity assessment | the process of demonstrating whether the requirements set out in Chapter III, Section 2 relating to a high-risk AI system have been fulfilled |
| 3(21) | conformity assessment body | a body that performs third-party conformity assessment activities, including testing, certification and inspection |
| 3(22) | notified body | a conformity assessment body notified in accordance with this Regulation and other relevant Union harmonisation legislation |
| 3(24) | CE marking | a marking by which a provider indicates that an AI system is in conformity with the requirements set out in Chapter III, Section 2 and other applicable Union harmonisation legislation providing for its affixing |
Terms relating to biometrics
The deployment of biometric AI systems is considered to be particularly sensitive and is therefore subject to strict rules to protect fundamental rights and privacy.
| Article | Term | Definition |
|---|---|---|
| 3(34) | biometric data | personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, such as facial images or dactyloscopic data |
| 3(35) | biometric identification | the automated recognition of physical, physiological, behavioural, or psychological human features for the purpose of establishing the identity of a natural person by comparing biometric data of that individual to biometric data of individuals stored in a database |
| 3(36) | biometric verification | the automated, one-to-one verification, including authentication, of the identity of natural persons by comparing their biometric data to previously provided biometric data |
| 3(37) | special categories of personal data | the categories of personal data referred to in Article 9(1) of Regulation (EU) 2016/679, Article 10 of Directive (EU) 2016/680 and Article 10(1) of Regulation (EU) 2018/1725 |
| 3(38) | sensitive operational data | operational data related to activities of prevention, detection, investigation or prosecution of criminal offences, the disclosure of which could jeopardise the integrity of criminal proceedings |
| 3(39) | emotion recognition system | an AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data |
| 3(40) | biometric categorisation system | an AI system for the purpose of assigning natural persons to specific categories on the basis of their biometric data, unless it is ancillary to another commercial service and strictly necessary for objective technical reasons |
| 3(41) | remote biometric identification system | an AI system for the purpose of identifying natural persons, without their active involvement, typically at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database |
| 3(42) | real-time remote biometric identification system | a remote biometric identification system, whereby the capturing of biometric data, the comparison and the identification all occur without a significant delay, comprising not only instant identification, but also limited short delays in order to avoid circumvention |
| 3(43) | post-remote biometric identification system | a remote biometric identification system other than a real-time remote biometric identification system |
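The distinction between biometric verification (Article 3(36)) and biometric identification (Article 3(35)) is essentially one of one-to-one versus one-to-many comparison. The following sketch illustrates only that structural difference; the similarity function, feature vectors and threshold are toy placeholders, not a real biometric matcher:

```python
def similarity(a, b):
    """Toy similarity: fraction of feature values that match exactly."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def verify(probe, enrolled_template, threshold=0.8):
    """One-to-one (Art. 3(36)): does the probe match the previously
    provided biometric data of one specific person?"""
    return similarity(probe, enrolled_template) >= threshold

def identify(probe, database, threshold=0.8):
    """One-to-many (Art. 3(35)): which identity in the reference
    database, if any, does the probe match?"""
    best = max(database, key=lambda name: similarity(probe, database[name]))
    return best if similarity(probe, database[best]) >= threshold else None

db = {"alice": [1, 2, 3, 4, 5], "bob": [9, 8, 7, 6, 5]}
print(verify([1, 2, 3, 4, 0], db["alice"]))  # True  (4/5 features match)
print(identify([9, 8, 7, 6, 0], db))         # 'bob' (best match above threshold)
print(identify([0, 0, 0, 0, 0], db))         # None  (no match above threshold)
```

A remote biometric identification system in the sense of Article 3(41) additionally operates without the active involvement of the person, typically at a distance, against a reference database.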