
    03.03.2025

    ADVANT Pulse No. 4: Your Labour & Employment News


    As artificial intelligence (AI) continues to transform workplaces, it is becoming increasingly integrated into employment processes such as hiring, employee monitoring, and employee evaluation. When using AI, companies already need to comply with existing regulation, including data protection and labor laws. Soon, however, they will also need to ensure compliance with a further regulatory framework – the EU AI Act. The AI Act, published in August 2024, categorizes AI systems into risk levels, with high-risk systems subject to the most stringent requirements. These provisions will enter into force in August next year.

    High-risk AI systems under the EU AI Act

    In an employment context, the new regulation primarily concerns:
    a) AI systems intended to be used for the recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyze and filter job applications, and to evaluate candidates;
    b) AI systems intended to be used to make decisions affecting terms of work-related relationships, the promotion or termination of work-related contractual relationships, to allocate tasks based on individual behavior or personal traits or characteristics or to monitor and evaluate the performance and behavior of persons in such relationships.

    The most important requirements for high-risk AI systems at a glance 

    The providers of high-risk AI systems bear the following obligations: 

    • Quality and risk management
    • Technical documentation, record-keeping and logging obligations
    • Consideration of accuracy, robustness, cybersecurity and accessibility during development
    • Transparency and information obligations
    • Registration in the relevant EU database and cooperation with the competent authority

    Those who only deploy high-risk AI systems generally have to fulfil fewer requirements than providers. However, there are scenarios in which deployers can become subject to the same extensive obligations as the providers of high-risk AI systems.

    Looking ahead, the proposed AI Liability Directive is poised to complement the EU AI Act. It aims to streamline legal pathways for individuals harmed by AI systems, including in employment-related situations. However, the legislative process is still in its early stages, and only rarely does a directive emerge from that process in the form in which it was presented by the EU Commission.

     

    Click here for the document
