General-purpose AI models

Article 3, point (63) of the AI Act defines a general-purpose AI model as:

  • “an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale,
  • that displays significant generality
  • and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications,
  • except AI models that are used for research, development or prototyping activities before they are placed on the market”.

In short, these general-purpose AI (GPAI) models are characterised by the fact that – as their name says – they can be deployed for various purposes. Recital 97 of the AI Act describes this as “generality” and the “capability to competently perform a wide range of distinct tasks”. Unlike specialised AI models, GPAI models are not trained to perform a single, narrowly defined task such as identifying faults in components on a production line. Instead, they are trained on very large amounts of data without a specific, pre-defined task (self-supervised training) and can therefore be integrated into downstream systems and deployed for a variety of purposes.

Large generative AI models are a typical example of a general-purpose AI model, given that they “allow for flexible generation of content, such as in the form of text, audio, images or video, that can readily accommodate a wide range of distinctive tasks” (recital 99 AI Act).

AI systems that integrate a GPAI model and are capable of serving a variety of purposes are therefore referred to as GPAI systems. GPAI systems can be used directly or integrated into other AI systems, including high-risk AI systems, in which case the requirements and obligations for AI systems or high-risk AI systems apply accordingly.

The obligations set out in the AI Act for GPAI models apply as from 2 August 2025. The AI Act makes a distinction between “normal” GPAI models and GPAI models with systemic risk.

“Normal” GPAI models

The characteristics defined above describe “normal” GPAI models. These models are subject to the following obligations in the AI Act:

Providers must ensure that their models are trustworthy and safe. Providers established in third countries and wanting to place a GPAI model on the Union market are required by Article 54(1) AI Act to first appoint, by written mandate, an authorised representative established in the Union.

Providers must generally comply with the information and documentation obligations set out in Article 53 of the AI Act, including:

  • drawing up and updating technical documentation,
  • documenting the capabilities and limitations of the GPAI model,
  • publishing a sufficiently detailed summary of the content used for training,
  • putting in place a policy to comply with EU copyright law (for example, respecting rights holders’ reservations regarding the use of their material for training).

Guidelines and codes of practice

The European AI Office is responsible for the supervision of GPAI models. The AI Office planned to publish guidelines and codes of practice for GPAI models before the obligations took effect in August 2025. While these instruments are not legally binding, they provide valuable insight into the European Commission’s interpretation of the obligations. The first code of practice has been published: the Code of Practice for General-Purpose AI Models.

The code is complemented by guidelines and a template for documenting the data used to train the GPAI model.
