
    18.03.2026

    Europe and Generative AI: the Parliament Charts the Course


    On 10 March, the European Parliament adopted a resolution — a non-binding political act, but one that signals the direction of European regulation — on the relationship between generative artificial intelligence and content protection. The resolution forms part of a regulatory framework already in motion: the AI Act, which entered into force in August 2024 and is being phased in progressively until 2027, has introduced transparency obligations for providers of general-purpose AI (GPAI) models.

     

    The Problem

    Generative AI systems are trained on vast quantities of content gathered from the internet without authorisation and without compensation for those who produced it. The result, according to the European Parliament, is that authors find themselves competing in the market with systems trained on their own content, without having been asked for consent or receiving any compensation. This dynamic raises significant issues both for authors' individual rights and for the economic sustainability of the European cultural sector as a whole.

     

    Measures Requested of the Commission

    Transparency. AI providers should supply a detailed list of the protected content used in training — an obligation based on the principle of sufficient disclosure: the disclosure must be detailed enough to allow rights holders to verify whether and how their content has been used. This measure builds on the AI Act's provisions for GPAI models, which require the publication of summaries of training data, whilst going a step further by demanding a significantly higher level of detail.

    No geographical loopholes. An AI system trained on protected content outside the EU should not be placed on the European market. This principle reflects the extraterritorial approach already adopted by the AI Act, which applies to all systems placed on the European market regardless of where they were developed.

    Traceable crawlers. Those who collect data from the web should be identifiable by website operators and should keep detailed records of their activities.
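    In practical terms, a "traceable" crawler is one that announces who operates it and keeps an auditable record of what it fetched and why. The sketch below illustrates the idea under stated assumptions: the User-Agent string, operator URL, and JSON-line log format are illustrative inventions, not anything prescribed by the resolution.

```python
# Minimal sketch of an identifiable, record-keeping crawler.
# All names here (ExampleDataBot, the info URL, the log fields)
# are hypothetical placeholders for illustration only.
import json
from datetime import datetime, timezone

# A User-Agent that lets site operators identify the crawler and its operator.
CRAWLER_UA = "ExampleDataBot/1.0 (+https://example.org/crawler-info)"


def build_request_headers() -> dict:
    """Headers sent with every request, so operators can see who is crawling."""
    return {"User-Agent": CRAWLER_UA}


def log_fetch(url: str, status: int, purpose: str) -> str:
    """Return a JSON-line record of one crawling event for a detailed activity log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "url": url,
        "status": status,
        "purpose": purpose,  # e.g. "model-training"
        "user_agent": CRAWLER_UA,
    }
    return json.dumps(record)


entry = log_fetch("https://example.org/article", 200, "model-training")
```

    Each fetch thus produces a self-describing record that could later be disclosed to rights holders or regulators.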

    Digital watermarks. Rights holders would be able to mark their content; AI providers would be obliged to keep watermarks intact and to offer tools to detect them. This issue intersects with the provisions of the AI Act on the labelling of synthetic content and the automated detection of deepfakes.
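    The pairing the resolution envisages — rights holders embed a mark, providers must preserve it and offer tools to detect it — can be illustrated with a toy least-significant-bit watermark over raw bytes. This is purely didactic: real content watermarks use far more robust, perceptual techniques, and nothing here reflects any format the resolution or the AI Act specifies.

```python
# Toy illustration of "embed" and "detect" as paired watermark operations,
# writing the bits of a marker into the least significant bits of carrier bytes.
# Real watermarking schemes are far more robust; this only shows the concept.

def embed(data: bytes, mark: bytes) -> bytes:
    """Hide `mark` in the least significant bits of `data` (LSB-first per byte)."""
    bits = [(b >> i) & 1 for b in mark for i in range(8)]
    if len(bits) > len(data):
        raise ValueError("carrier too small for watermark")
    out = bytearray(data)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite the lowest bit only
    return bytes(out)


def detect(data: bytes, mark_len: int) -> bytes:
    """Recover a `mark_len`-byte watermark from the least significant bits."""
    bits = [b & 1 for b in data[: mark_len * 8]]
    return bytes(
        sum(bits[i * 8 + j] << j for j in range(8)) for i in range(mark_len)
    )


carrier = bytes(range(64))          # stand-in for media bytes
marked = embed(carrier, b"RIGHTS")  # rights holder marks the content
```

    The obligation the Parliament proposes would fall on the provider's side of this pair: keeping the embedded bits intact through processing, and exposing a `detect`-style tool to rights holders.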

    Right of exclusion. The Parliament proposes that rights holders should be able to exclude their content from model training, using standardised formats managed by the EUIPO. This would constitute an operational strengthening of the opt-out mechanism already provided for in the Digital Single Market (DSM) Directive, which the AI Act refers to but does not regulate in detail.
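    Operationally, a standardised opt-out means a crawler must check a machine-readable policy before ingesting content. The sketch below assumes a hypothetical `ai-training: disallow` directive in a site-level policy file; the EUIPO-managed formats the Parliament asks for do not yet exist, so the syntax here is an invented placeholder.

```python
# Sketch of checking a machine-readable training opt-out before ingesting a page.
# The "ai-training" directive and its key/value syntax are hypothetical:
# the resolution asks the EUIPO to standardise such formats, but no official
# standard has been adopted yet.

def parse_optout(policy_text: str) -> bool:
    """Return True if the policy opts the rights holder out of AI training."""
    for line in policy_text.splitlines():
        key, _, value = line.partition(":")
        if key.strip().lower() == "ai-training":
            return value.strip().lower() == "disallow"
    return False  # no directive found: no opt-out recorded


policy = "ai-training: disallow\ncontact: rights@example.org"
excluded = parse_optout(policy)  # True: this content must be skipped
```

    A compliant training pipeline would evaluate such a check for every source and skip any content whose rights holder has opted out.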

    Collective licences. The EUIPO could coordinate a sector-specific licensing system. For content already used without authorisation, fair and proportionate transitional remuneration is envisaged.

    Labelling. The proposal suggests introducing an obligation to distinguish content "generated by AI" from that "produced by a human being", with a code of good practice to be drawn up by the Commission — in line with the transparency obligations already introduced by the AI Act for systems that generate synthetic content or interact directly with users.

     

    A Fundamental Principle

    Content protection should remain anchored to human authorship: content generated entirely by AI would not be protectable and would remain in the public domain.

     

    Why It Matters Now

    Read alongside the AI Act, the resolution contributes to shaping a regulatory mosaic still under construction. The AI Act laid the foundations — transparency obligations, risk management, model governance — but deferred the issue of content protection to subsequent developments. This resolution indicates the form those developments might take. For those working in the cultural or technology sectors, now may be the time to assess their practices in light of a regulatory framework which, between existing obligations and measures still being defined, appears set to evolve significantly in the years ahead.