
    09.10.2023

    Artificial Intelligence Act – an overview


    1. Introduction

    In April 2021, the European Commission put forward a proposal for a regulation on Artificial Intelligence (hereinafter, the “Artificial Intelligence Act” or “AIA”).

     

    The AIA intends – inter alia – to ensure that the use of artificial intelligence (AI) systems, whatever the area of use (e.g., health care, education, finance, energy, etc.), takes place in accordance with the founding values of the European Union and within a defined regulatory framework.

     

    To that end, the AIA contains particularly strict provisions on data quality, transparency, human oversight and accountability arising from use of AI systems.

     

    The AIA also outlines standards for so-called general-purpose AI, i.e., systems that can be used for different purposes with varying degrees of risk. Such technologies include, for example, generative AI systems based on large language models, such as ChatGPT.

     

    In June 2023 the European Parliament adopted its negotiating position on the AIA, kicking off the trilogue between the Commission, the Council and the European Parliament, which will conclude with the issuance of the final text. The final version of the regulation is expected to be published by the end of 2023 and to become applicable in 2026, after a two-year grace period (as in the case of the GDPR). Such period will allow the recipients of the AIA’s provisions (hereinafter, the “Obligated Parties”) to adapt to the changes provided for by the regulation before it becomes effective.

     

    Therefore, please note that this memorandum refers to a piece of legislation still subject to a legislative process that might lead to several changes in its contents.

     

     

    2. Application

    The AIA will for the first time introduce a definition of “Artificial Intelligence” to be used to identify which tools fall within the scope of the AIA. Although such definition will not be final until the act is passed, it is already foreshadowed, inter alia, that many software tools classified as “medical devices”, even those already on the market, will be deemed to be based on AI systems and will therefore be required to meet the requirements of the AIA (e.g., devices that assist physicians in diagnosing diseases).

     

    The AIA will also set out clear definitions for the different players involved in AI. This means that all parties involved in the development, use, import, distribution or production of AI models will be held accountable, each in their respective role. In addition, the AIA will also apply to providers and users of AI systems located outside the EU, if the output produced by the system is intended for use in the EU.

     

     

    3. What will be needed to comply with the AIA

    3.1. Phase 1: inventory and analysis of AI Models

     

    The first step that Obligated Parties must take to comply with the AIA is to assess whether they use systems that qualify as “artificial intelligence”. Such models may already be in use, under development or being procured from third-party providers.

     

    3.2. Phase 2: classification of AI risk

     

    The AIA takes a risk-based approach: in other words, in order to understand what obligations one is subject to, it is necessary to understand what category of risk the AI system used belongs to.

     

    So, based on the cataloguing of AI models, the Obligated Parties will have to classify the models according to risk. In this regard, a risk assessment has been proposed in two further sub-steps: a first classification, to be carried out through the risk categories provided in the AIA; and a second assessment, concerning the impact of AI on fundamental rights, which each AI user will be required to carry out independently according to internally established criteria (Fundamental Rights Impact Assessment).

     

    As for the classification of models according to risk, the AIA distinguishes various categories:

     


    3.2.1. Unacceptable risk

     

    In defining the classification, the AIA gives some examples of models that pose an unacceptable risk and are therefore prohibited. In particular, the following are prohibited:

    • “real time” and “a posteriori” remote biometric identification systems in publicly accessible spaces (with an exception for law enforcement authorities using biometric identification ex post for the prosecution of serious crimes, subject to judicial authorization);
    • social scoring systems (involving ranking of people based on their behaviour or characteristics);
    • the use of cognitive behavioural manipulation techniques targeting specific categories of vulnerable people or groups (e.g., talking toys for children);
    • predictive policing systems based on profiling, location or past criminal behaviour;
    • emotion recognition systems used in the areas of law enforcement, border management, workplaces and educational institutions;
    • untargeted extraction of biometric data from the Internet or CCTV footage to create facial recognition databases.

     

     

    3.2.2. High risk

     

    Similarly, Annex III to the AIA provides a list of high-risk systems under Article 6(2) that must comply with multiple requirements, undergo a conformity assessment (to be completed before the model is placed on the market) and, in any event, be subject to continuous surveillance even after being placed on the market.

     

    High-risk AI systems fall into two categories:

    • AI systems intended for use as a safety component of a product, or which are themselves a product, covered by the European Union harmonization legislation listed in Annex II, such as toys, automobiles, medical devices and in vitro diagnostic medical devices (referred to in Regulations (EU) 2017/745 and 2017/746);
    • products whose safety component is an AI system – or AI systems that are themselves products – subject to third-party conformity assessment for the purpose of placing on the market or putting into service under the EU harmonization legislation listed in Annex II. In addition, there are certain AI systems falling within eight specific areas, which will need to be registered in an EU database:
      • biometric identification and categorisation of natural persons;
      • management and operation of critical infrastructure;
      • education and vocational training;
      • employment, worker management (e.g., for the recruitment or evaluation of employees) and access to self-employment;
      • access to and use of essential private and public services and benefits (e.g., AI systems intended to be used to make decisions or materially influence decisions on the eligibility of natural persons for health and life insurance);
      • law enforcement;
      • migration management, asylum and border control;
      • assistance in legal interpretation and enforcement of the law.

    AI systems used to influence voters and the outcome of elections, as well as recommender systems used by social media platforms designated as “very large online platforms” under Regulation (EU) 2022/2065 (Digital Services Act), have been added by the Parliament to the high-risk list originally proposed by the Commission.

     

    Among the requirements applying to high-risk systems are:

    • an appropriate risk management system;
    • capacity to register activities (such as log registration);
    • human oversight;
    • adequate governance of data used for training, tests and validation;
    • transparency and explainability;
    • checks ensuring accuracy, robustness and cybersecurity.

    For example, as concerns medical devices, industry regulations already provide for certain conformity assessment procedures. In this regard, the AIA requires that AI conformity be assessed through the conformity assessment conducted under the industry regulations, in order to avoid overlapping procedures. Consistently, the issuance of an EC certificate of conformity will attest to the AI system’s compliance with both the Medical Device Regulation and the AIA. In any event, Article 59 of Regulation (EU) 2017/745 and Article 54 of Regulation (EU) 2017/746 shall continue to apply.

     

    3.2.3. Foundation models (or base models)

     

    Specific rules will also apply to foundation models (or base models), including generative AI systems (such as, by way of example, ChatGPT). Generative and foundation AI systems can both be considered general-purpose AI, because they can perform a variety of tasks.

     

    For such models, the AIA imposes higher transparency requirements, whereby:

    • those who develop generative AI will have to make it clear to the user, in the end result, that the content has been generated by the AI (e.g., this will allow users to distinguish “deep fakes” from real images);
    • Obligated Parties will have to provide guarantees against the generation of illegal content;
    • Obligated Parties will have to document and make available to the public a sufficiently detailed summary of the use of training data protected by copyright law.

    Providers of “basic AI” models will have to assess and mitigate the possible risks associated with their models (for health, safety, fundamental rights, the environment, democracy and rule of law) and register them in the EU database before they are placed on the market.

     

     

     

    3.2.4. Limited or minimal risk

     

    The remaining AI applications are considered to be of limited or minimal risk. Such systems must meet certain transparency requirements, which allow users to understand that they are interacting with an AI and to make informed decisions. Examples of such systems are: chatbots, “deep fake” image or video generators (when not considered high risk), and translation or weather forecasting systems.

     

    AI systems with minimal risk are, for instance, spam filters or video games.

     

     

     

    3.3. Conformity assessment (pre-marketing) and enforcement (post-marketing)

     

    Providers of high-risk AI systems shall ensure that such a system undergoes a conformity assessment procedure in accordance with Article 43 of the AIA before it is placed on the market or put into service.

     

    If, following such assessment, the AI is deemed compliant with statutory requirements, providers shall draw up an EU declaration of conformity in accordance with Article 48 and affix the CE marking.

     

    The AIA requires high-risk AI providers to monitor the performance of their systems even after market launch, given that AI evolves as it receives new inputs. During such monitoring, providers will be required to carry out continuous analysis of the devices, software and other AI systems interacting with the AI system, taking into account restrictions arising from data protection, copyright and competition law.

     

    The Commission will have to provide a template for post-market monitoring plans within one year of the entry into force of the AIA.

     

     

    4. Data and data governance

    The AIA also addresses concerns about the quality of data used to create AI systems. Therefore, the regulation provides, among other things:

    • rules on how “training datasets” (including validation and test datasets) are to be designed and used (requiring datasets to be “relevant, representative, free of errors and complete”);
    • rules on data preparation, including labelling, cleaning, and aggregation;
    • exemption from those GDPR rules restricting the collection of sensitive personal data for the purpose of correcting algorithm bias.

     

    5. Human Oversight

    According to the AIA, AI systems must be designed and developed in such a way that they can be effectively overseen by natural persons during the period in which the system is in use.

     

    It is not simply a matter of transparency of the operation of the AI system (as in the GDPR). Such an obligation is broader and should allow, for instance, the “human supervisor” to detect anomalies in order to be able to correctly interpret the results of the system. An explicit objective is to prevent or minimise risks to fundamental rights.

     

    If a high-risk system is operated by a “user” rather than the original provider (e.g., a private company purchases and installs an automated recruitment system), the allocation of liabilities under the AIA is very different compared to the GDPR. Under the GDPR, the company would be the “data controller” and thus the party subject to the relevant duties. Under the AIA, the manufacturer of the AI has sole responsibility for obtaining the conformity assessment before the system is placed on the market and for implementing “human supervision” in a way that is appropriate for its use by the user. However, should the user substantially modify the system, it will become the new “provider”, with all the associated certification duties.

     

     

    6. Innovation and research

    Exemptions for research activities and for AI components provided under open-source licences have recently been proposed by the Parliament. Such exemptions include the promotion of regulatory sandboxes (temporary exemption from the relevant regulations during a testing period), provided they are set up by public authorities for the purpose of testing AI systems before they are placed on the market or otherwise put into service.

     

     

    7. Liability and artificial intelligence

    The AIA is part of a three-pillar package proposed by the European Commission to support AI in Europe. The other pillars include an amendment of the Product Liability Directive (PLD) and a new AI Liability Directive (AILD). While the AIA focuses on safety and ex ante protection of fundamental rights, the other two pillars deal with damage caused by AI systems.

     

    Failure to comply with the AIA could trigger, depending on the level of risk, different degrees of relief from the burden of proof: under the PLD, for no-fault product liability claims; under the AILD, for any other (fault-based) claims. The amendment of the PLD and the AILD will still require a long approval process, as they have not yet been approved by the European Parliament and, being directives, will have to be transposed at national level.

     

     

    8. European artificial intelligence board

    The AIA also aims at establishing a “European Artificial Intelligence Board”, which should oversee the implementation of the regulation and ensure its uniform application throughout the EU. The Board should be responsible for providing opinions and recommendations on arising issues and for providing guidance to national authorities.

     

     

    9. Penalties

    At present, the penalties for non-compliance with the AIA are significant: they currently amount to up to EUR 40 million or up to 7% of annual global turnover, depending on the severity of the breach.

     

     

    10. Conclusions

    The approach adopted by the EU through the AIA is to strike a balance between innovation and the need for protection. From this perspective, Obligated Parties using AI must learn to navigate their way through the (still evolving) regulatory provisions, aiming for innovation but always complying with the legal framework. In particular, Obligated Parties should evaluate the potential impact of the AIA on their activities and assess whether their modus operandi complies with the principles and provisions that will come into force. Despite the two-year grace period before the actual implementation of the AIA, it is necessary to act promptly, given that the development of AI systems can take a very long time.

     

    Therefore, a proactive approach by the Obligated Parties (which may consist in the monitoring of regulatory developments) is crucial not only for future compliance but also to mitigate the risks of complaints or litigation with the various contractual parties.
