In April 2021, the European Commission put forward a proposal for a regulation on Artificial Intelligence (hereinafter, “Artificial Intelligence Act” or “AIA”).
The AIA intends – inter alia – to ensure that the use of artificial intelligence (AI) systems, whatever the area of use (e.g., health care, education, finance or energy), takes place in accordance with the founding values of the European Union and within a defined regulatory framework.
To that end, the AIA contains particularly strict provisions on data quality, transparency, human oversight and accountability arising from use of AI systems.
The AIA also outlines standards for so-called general-purpose AI, i.e., systems that can be used for different purposes and with varying degrees of risk. Such technologies include, for example, generative AI systems based on large language models, such as ChatGPT.
In June 2023 the European Parliament adopted its negotiating position on the AIA, kicking off the trilogue between the Commission, the Council and the European Parliament, which will conclude with the issuance of the final text. The final version of the regulation is expected to be published by the end of 2023 and to become applicable in 2026, after a two-year grace period (as was the case with the GDPR). Such period will allow the recipients of the AIA’s provisions (hereinafter, the “Obligated Parties”) to adapt to the changes provided for by the regulation before it becomes effective.
Please note, therefore, that this memorandum refers to a piece of legislation that is still going through the legislative process and whose contents may change in several respects.
The AIA will for the first time introduce a definition of “Artificial Intelligence” to be used to identify which tools fall within the scope of the regulation. Although such definition will not be final until the act is passed, it is already foreseeable, inter alia, that many software tools classified as “medical devices”, including those already on the market, will be deemed to be based on AI systems and will therefore be required to meet the requirements of the AIA (e.g., devices providing assistance to physicians in diagnosing diseases).
The AIA will also set out clear definitions for the different players involved in AI. This means that all parties involved in the development, use, import, distribution or production of AI models will be accountable in their respective capacities. In addition, the AIA will also apply to providers and users of AI systems located outside the EU, if the output produced by the system is intended for use in the EU.
3.1. Phase 1: inventory and analysis of AI Models
The first step that Obligated Parties must take to comply with the AIA is to assess whether they use systems that qualify as “artificial intelligence”. Such models may already be in use, still in development, or in the process of being procured from third-party providers.
3.2. Phase 2: classification of AI risk
The AIA takes a risk-based approach: in other words, in order to understand what obligations one is subject to, it is necessary to understand what category of risk the AI system used belongs to.
Based on this inventory of AI models, the Obligated Parties will then have to classify the models according to risk. In this regard, the assessment is structured in two further sub-steps: a first classification carried out against the risk categories provided for in the AIA, and a second assessment concerning the impact of the AI system on fundamental rights, which each user of AI will be required to carry out independently according to criteria to be established internally (the Fundamental Rights Impact Assessment).
As for the classification of models according to risk, the AIA distinguishes various categories:
3.2.1. Unacceptable risk
In defining the classification, the AIA gives some examples of models that pose an unacceptable risk and are therefore prohibited. In particular, the following are prohibited:
3.2.2. High risk
Similarly, Annex III to the AIA provides a list of high-risk systems under Article 6(2), which must comply with multiple requirements, undergo conformity assessment (to be completed before the model is placed on the market) and, in any event, be subject to continuous surveillance even after being placed on the market.
High-risk AI systems fall into two categories:
AI systems used to influence voters and the outcome of elections, as well as recommender systems used by social media platforms designated as “very large online platforms” under Regulation (EU) 2022/2065 (Digital Services Act), have been added to the high-risk list originally proposed by the Commission.
Among the requirements applying to high-risk systems are:
For example, as regards medical devices, sector-specific regulations already provide for conformity assessment procedures. In this regard, the AIA requires that the AI system’s conformity be assessed within the conformity assessment conducted under those sector-specific regulations, so as to avoid overlapping procedures. Accordingly, the issuance of an EC certificate of conformity will attest to the AI system’s compliance with both the Medical Device Regulation and the AIA. Article 59 of Regulation (EU) 2017/745 and Article 54 of Regulation (EU) 2017/746 shall, in any event, continue to apply.
3.2.3. Foundation models (or base models)
Foundation models, whether in the form of generative AI systems (such as, by way of example, ChatGPT) or of base AI systems, will also be subject to specific rules. Generative and foundation AI systems can both be considered general-purpose AI, as they can perform a variety of tasks.
For such models, the AIA imposes higher transparency requirements, whereby:
Providers of foundation (base) models will have to assess and mitigate the possible risks associated with their models (to health, safety, fundamental rights, the environment, democracy and the rule of law) and register them in the EU database before placing them on the market.
3.2.4. Limited or minimal risk
The remaining AI applications are considered to be of limited or minimal risk. Such systems must meet certain transparency requirements, which allow users to understand that they are interacting with an AI and to make informed decisions. Examples of such systems are: chatbots, generators of “deep fake” images or videos (when they are not considered high risk), and translation or weather forecasting systems.
AI systems with minimal risk are, for instance, spam filters or video games.
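Purely by way of illustration of Phases 1 and 2 described above, an Obligated Party may find it useful to keep an internal, structured register of its AI systems, their risk classification and the compliance actions still outstanding. The following sketch is not required by the AIA and uses hypothetical names of our own (RiskTier, AISystemRecord, fria_completed, outstanding_actions); it merely shows one possible way of organising such a register.

```python
# Illustrative sketch only: a minimal internal register for tracking the
# Phase 1 inventory and Phase 2 risk classification described above.
# All names are our own assumptions, not terms defined by the AIA.
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # prohibited practices
    HIGH = "high"                   # Annex III / Article 6(2) systems
    LIMITED = "limited"             # transparency obligations
    MINIMAL = "minimal"             # e.g., spam filters, video games


@dataclass
class AISystemRecord:
    name: str
    intended_purpose: str
    provider: str
    in_house: bool                  # developed internally or procured from a third party
    risk_tier: RiskTier
    fria_completed: bool = False    # Fundamental Rights Impact Assessment (second sub-step)
    notes: list[str] = field(default_factory=list)

    def outstanding_actions(self) -> list[str]:
        """Rough reminder list derived from the risk tier; not a legal checklist."""
        if self.risk_tier is RiskTier.UNACCEPTABLE:
            return ["discontinue use: prohibited practice"]
        actions: list[str] = []
        if self.risk_tier is RiskTier.HIGH:
            actions += [
                "conformity assessment before placing on the market",
                "post-market monitoring plan",
            ]
        if self.risk_tier in (RiskTier.HIGH, RiskTier.LIMITED):
            actions.append("transparency information for users")
        if not self.fria_completed:
            actions.append("carry out Fundamental Rights Impact Assessment")
        return actions


# Example: a hypothetical diagnostic-support tool procured from a third party.
triage_tool = AISystemRecord(
    name="DiagnosticAssistant",
    intended_purpose="assist physicians in diagnosing diseases",
    provider="Third-party vendor",
    in_house=False,
    risk_tier=RiskTier.HIGH,
)
print(triage_tool.outstanding_actions())
```

In this example, the register reminds the organisation that the conformity assessment, the post-market monitoring plan and the Fundamental Rights Impact Assessment for the high-risk tool still need to be addressed.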
3.3. Conformity assessment (pre-marketing) and enforcement (post-marketing)
Providers of high-risk AI systems shall ensure that such a system undergoes a conformity assessment procedure in accordance with Article 43 of the AIA before it is placed on the market or put into service.
If, following such assessment, the AI is deemed compliant with statutory requirements, providers shall draw up an EU declaration of conformity in accordance with Article 48 and affix the CE marking.
The AIA requires high-risk AI providers to monitor the performance of their systems even after market launch, given that AI evolves as it receives new inputs. During such monitoring, providers will be required to carry out continuous analysis of the devices, software and other AI systems interacting with the AI system, taking into account restrictions arising from data protection, copyright and competition law.
The Commission will have to provide a template for post-market monitoring plans within one year of the entry into force of the AIA.
The AIA also addresses concerns about the quality of data used to create AI systems. Therefore, the regulation provides, among other things:
According to the AIA, AI systems must be designed and developed in such a way that they can be effectively overseen by natural persons during the period in which the system is in use.
It is not simply a matter of transparency regarding the operation of the AI system (as under the GDPR). The obligation is broader and should allow, for instance, the “human supervisor” to detect anomalies so as to correctly interpret the results of the system. An explicit objective is to prevent or minimise risks to fundamental rights.
If a high-risk system is operated by a “user” rather than the original provider (e.g., a private company purchases and installs an automated recruitment system), the allocation of responsibilities is very different under the AIA compared to the GDPR. Under the GDPR, the company would be the “data controller” and thus the party subject to the relevant duties. Under the AIA, the manufacturer of the AI system remains solely responsible for obtaining the conformity assessment before the system is placed on the market and for implementing “human supervision” in a way that is appropriate for its use by the user. However, should the user substantially modify the system, it will become the new “provider”, with all the associated certification duties.
Exemptions for research activities and for AI components provided under open-source licences have recently been proposed by the Parliament. The Parliament’s position also promotes regulatory sandboxes (controlled environments granting temporary exemption from the relevant rules during a testing period), provided they are set up by public authorities for the purpose of testing AI systems before they are placed on the market or otherwise put into service.
The AIA is part of a three-pillar package proposed by the European Commission to support AI in Europe. The other two pillars are an amendment of the Product Liability Directive (PLD) and a new AI Liability Directive (AILD). While the AIA focuses on safety and ex ante protection of fundamental rights, the other two pillars deal with damage caused by AI systems.
Failure to comply with the AIA could trigger, depending on the level of risk, different degrees of relief from the burden of proof: under the PLD for no-fault product liability claims, and under the AILD for any other (fault-based) claims. The amendment of the PLD and the AILD still face a long approval process, as they have not yet been approved by the European Parliament and, being directives, will have to be transposed at national level.
The proposal also aims at establishing a “European Artificial Intelligence Board”, which should oversee the implementation of the regulation and ensure its uniform application throughout the EU. The Board would be responsible for providing opinions and recommendations on emerging issues and for providing guidance to national authorities.
The penalties currently envisaged for non-compliance with the AIA are significant: up to EUR 40 million or up to 7% of annual global turnover, depending on the severity of the breach.
The approach adopted by the EU through the AIA is to strike a balance between innovation and the need for protection. From this perspective, Obligated Parties using AI must learn to navigate their way through the (still evolving) regulatory provisions, aiming for innovation but always complying with the legal framework. In particular, Obligated Parties should evaluate the potential impact of the AIA on their activities and assess whether their modus operandi complies with the principles and provisions that will come into force. Despite the two-year grace period before the actual implementation of the AIA, it is necessary to act promptly, given that the development of AI systems can take a very long time.
Therefore, a proactive approach by the Obligated Parties (which may consist in the monitoring of regulatory developments) is crucial not only for future compliance but also to mitigate the risks of complaints or litigation with the various contractual parties.