Italy has become the first country in the European Union to pass a national law on AI before the EU's own AI Act takes full effect. The law, approved by the Senate in the middle of last month, builds on discussions that began in April last year. Impact Newswire reports that the Italian government wants to create more detailed rules for both public and private AI use, focusing on accountability, ethics and transparency.
The law includes 28 articles that define how AI can be used in different sectors. It also introduces rules for protecting minors under 14, requiring parental consent before any data linked to them can be processed. Italian lawmakers say the goal is to make AI systems fair and safe for citizens while allowing companies to keep innovating responsibly.
According to Giulio Uras:
“The Italian government’s effort has been both remarkable and, for once, genuinely timely. It sets a clear benchmark for EU countries aiming to complement the AI Act at the national level. Its approach is founded on three key pillars: innovation, transparency, and criminal protection.
On innovation, the Italian law conveys a clear and forward-looking policy direction. By authorising the secondary use of pseudonymised personal data for research purposes, it adopts a functional and proportionate regulatory model designed to foster scientific and technological development. This approach implicitly acknowledges that Europe’s ability to compete in the global AI landscape depends on avoiding an overly dogmatic interpretation of fundamental rights (particularly in the field of data protection) that could unduly restrict legitimate research and innovation.
As for transparency, the Italian law is more debatable. The law extends disclosure obligations across several sectors (including employment and intellectual professions) without following the AI Act’s risk-based approach. Such a broad rule may overburden low-risk systems and, paradoxically, stifle innovation.
The criminal protection provisions yield mixed results. The new offense addressing deepfakes (Art. 612-quater of the Italian Criminal Code) effectively targets a growing threat. More broadly, introducing criminal law safeguards was undoubtedly necessary, as it reinforces protection against the unlawful use of AI to obtain unfair profits or inflict harm. However, criminal provisions are effective only when they can be concretely enforced. In this regard, the drafting technique adopted for the new aggravating circumstance (Art. 61, no. 11-decies of the Italian Criminal Code) raises issues of legal clarity and operational effectiveness, which may ultimately limit its enforceability in practice.
The real challenge for EU Member States that wish to follow Italy’s example will be to do so without adding unnecessary layers of bureaucracy or new burdens on businesses. Otherwise, the drive for innovation risks being lost in translation.”
Full article published in TechRound.