Italy has become the first country in the European Union to pass a national law on AI before the EU’s own AI Act takes effect. The law, approved by the Senate in the middle of last month, builds on discussions that began in April last year. Impact Newswire reports that the Italian government wants to create more detailed rules for both public and private AI use, focusing on accountability, ethics and transparency.
The law includes 28 articles that define how AI can be used in different sectors. It also introduces rules for protecting minors under 14, requiring parental consent before any data linked to them can be processed. Italian lawmakers say the goal is to make AI systems fair and safe for citizens while allowing companies to keep innovating responsibly.
What Does The New Law Cover?
The new AI framework assigns oversight to two main authorities: the Agency for Digital Italy and the National Cybersecurity Agency. These bodies will monitor AI systems, manage inspections and promote secure use across different sectors. In health, the law allows the reuse of anonymised patient data for scientific research. It also introduces criminal penalties for those who share fake or misleading AI-generated images or videos.
To make sure there is fair treatment at work, employers must let their staff know when AI tools are used to manage or evaluate them. The law also requires that AI respects privacy and prevents discrimination. Public administrations are advised to use data centres based in Italy when handling sensitive or strategic data.
What Are The Top Priorities Of The Italian Framework?
In short, the new framework is built on the following principles:
- AI must respect privacy, human rights and fairness.
- Organisations must clearly explain how algorithms make decisions.
- AI should be developed responsibly without restricting creativity.
The law will soon be published in the Italian Official Gazette. Authorities will then start setting up systems to train staff, map AI tools in use, and check their compliance. Organisations are also expected to create internal AI policies and monitor their suppliers’ systems.
While it is too early to measure its long-term effects, Italy’s early action has drawn attention across Europe. Many governments are expected to follow closely as the country puts its new framework into practice.
How Can Other Nations Follow?
Experts share their thoughts on how other countries can follow Italy’s lead when it comes to regulating AI:
Our Experts:
- Mikhail Fedorinin, Founder, Albato
- Marcus Wolter, Partner & Global Director, Corporate Practice
- Dan Herbatschek, CEO, Ramsey Theory Group
- Gökçen Tapkan, Director of Data Research & European Commission Expert Evaluator, Black Kite
- Bob Bilbruck, CEO, Captjur
- Giulio Uras, Counsel, ADVANT Nctm
- Martin Davies, Senior Audit Alliance Manager, Drata
- Jonny Murphy-Campbell, Commercial Director, Resolvable
- Peter Wood, CTO, Spectrum Search
- James Kirkham, Founder, ICONIC.
- Tomy Lorsch, Founder & CEO, ComplexChaos
Mikhail Fedorinin, Founder, Albato
“Italy’s move to roll out AI regulations is raising the bar for everyone building tech globally. These standards are designed to ensure that innovation goes hand in hand with responsibility — and that’s the only way AI can bring lasting value to businesses and society.
“At Albato, we’ve already built our AI tools with these principles in mind: transparency, accountability, and data protection. Honestly, if you’re developing AI ethically, you’re already doing most of what these new Italian rules ask for.
“I’m sure more countries will jump on board soon, but the key is to keep rules flexible and practical so they encourage innovation rather than limit it. If we all head in the same direction, AI will keep getting more and more trustworthy for both users and developers.”
Marcus Wolter, Partner & Global Director, Corporate Practice
“As far as I am aware, Italy is the first EU member state to enact such a comprehensive national AI framework. It is aligning Italian law with the EU AI Act and is now in force as of 10 October 2025. It assigns national oversight (Agency for Digital Italy and the National Cybersecurity Agency), criminalises harmful deepfakes, sets sector rules (healthcare, education, workplace, justice), and couples governance with investment measures.
“For every other country that follows, alignment with the EU framework and its tiered approach to different risk categories is essential to avoid friction between legislative layers. They need to identify lead digital or standards authorities and support them sufficiently, while also enabling sector-specific authorities (e.g. financial and healthcare regulators) to support oversight. Rather than creating a lot of new regulation, adapting existing regulation such as the criminal codes should take priority.
“Most importantly, EU and national legislators and regulators need to be careful not to create additional red tape that squashes innovation. They should maintain a risk-based, standards-driven regime that relies on harmonised technical standards, sandboxes, and clear conformity templates. Compliance needs to be driven by predictable, evidence-based procedures rather than new bureaucracy. We also need fast-track mechanisms that let innovators test, iterate, and scale responsibly without redundant filings.”
Dan Herbatschek, CEO, Ramsey Theory Group
“Italy’s new national AI law makes it the first EU member state to implement such legislation – but it complements the EU AI Act, it does not replace it. It sets national governance guidelines, expands on child-safety initiatives and makes harmful deepfakes a criminal offence.
“I think the basic first step for countries who want to follow is to establish one national agency that acts as the primary coordinator, and then have a collaboration of supportive regulators who can also weigh in and advise. In Italy, the Digital and Cybersecurity agencies led the way, but finance, health and other agencies were continually briefed. This sharing of progress and data is important since AI regulation crosses over into healthcare, employment, justice, education, public administration and more, so there should be guardrails in place for each sector – but with one agency taking the lead.”
Gökçen Tapkan, Director of Data Research & European Commission Expert Evaluator, Black Kite
“Italy just did something remarkable. When I first heard about this, I thought it was another restrictive framework – but no, this law is proactive and enabling. While most countries are still debating what to ban, Italy is giving doctors, lawyers, judges, public administrators, and artists a practical framework for making the most of the technology while ensuring that it remains firmly under human control. Under this law, AI can be used to support diagnosis and treatment as long as the medical team makes the final decision and the patient is informed; lawyers and consultants can leverage AI tools to serve clients better – they just need to be transparent about it.
“Judges can use AI to analyse case law – but human judgment stays sacred. I particularly like this provision: “Works of human ingenuity of a creative nature which belong to literature, … are protected even where created with the aid of artificial intelligence tool”, which means that the human creativity that we are all looking for is still protected even if the tool used is artificial intelligence.
“As you can see, the Italian model is sector-specific, and hence practical. Citizens, patients, clients, workers and users know what is permitted, what is expected and who is accountable. This level of clarity also sends a message that the government is a partner, not just a referee. They are actually putting money behind it – €1 billion in AI, cybersecurity and telecom startups. It is the equivalent of saying, “we trust this, we believe in it, and we are here to help you build it the right way”.
“From my earlier experience as a consultant in privacy, I would add that clear guidelines and ministerial-level decrees will be essential to avoid ambiguity, especially in hospitals and for companies operating across borders. We have the EU AI Act, which is a Europe-wide, horizontal, high-level framework, while Italy provides a practical playbook to apply in day-to-day activities. That is precisely why other countries should adopt a similar approach – not just to follow, but to align and make adoption simpler and safer.”
Bob Bilbruck, CEO, Captjur
“I don’t think other countries should follow Italy. I think innovation needs less policy and regulation, and it needs the ability to grow. We are in the very early stages of an AI revolution, and too much regulation will kill innovation. The way Europe is approaching cutting-edge innovative technologies like AI is concerning to me. It’s going to limit competition and innovation – this is the wrong approach, and we need an open market to see the huge potential that AI can provide not only to businesses but to our personal lives.
“Limiting AI to a handful of large companies is the wrong approach and will lead to innovation in the shadows or worse a plutocracy that owns all the intelligence of the world; a dystopian vision where the world’s wealthiest individuals or corporations hold power not just because of their money, but also because they control the most critical sources of knowledge that are essential for progress, decision-making, and societal development. In this scenario, the “plutocracy” can influence or dominate global issues such as politics, economics, innovation, and even the future of humanity, simply because they control access to the world’s most valuable intellectual resources.
“In other words, it’s about a concentration of both wealth and knowledge in the hands of a small, powerful elite, which could lead to a situation where they have an outsized influence over global affairs. This could be seen as a threat to democracy or equitable progress, as those without access to such knowledge may be left behind.”
Giulio Uras, Counsel, ADVANT Nctm
“The Italian government’s effort has been both remarkable and, for once, genuinely timely. It sets a clear benchmark for EU countries aiming to complement the AI Act at the national level. Its approach is founded on three key pillars: innovation, transparency, and criminal protection.
“On innovation, the Italian law conveys a clear and forward-looking policy direction. By authorising the secondary use of pseudonymised personal data for research purposes, it adopts a functional and proportionate regulatory model designed to foster scientific and technological development. This approach implicitly acknowledges that Europe’s ability to compete in the global AI landscape depends on avoiding an overly dogmatic interpretation of fundamental rights (particularly in the field of data protection) that could unduly restrict legitimate research and innovation.
“As for transparency, the Italian law is more debatable. The law extends disclosure obligations across several sectors (including employment and intellectual professions) without following the AI Act’s risk-based approach. Such a broad rule may overburden low-risk systems and, paradoxically, stifle innovation.
“The criminal protection provisions yield mixed results. The new offense addressing deepfakes (Art. 612-quater of the Italian Criminal Code) effectively targets a growing threat. More broadly, introducing criminal law safeguards was undoubtedly necessary, as it reinforces protection against the unlawful use of AI to obtain unfair profits or inflict harm. However, criminal provisions are effective only when they can be concretely enforced. In this regard, the drafting technique adopted for the new aggravating circumstance (Art. 61, no. 11-decies of the Italian Criminal Code) raises issues of legal clarity and operational effectiveness, which may ultimately limit its enforceability in practice.
“The real challenge for EU Member States that wish to follow Italy’s example will be to do so without adding unnecessary layers of bureaucracy or new burdens on businesses. Otherwise, the drive for innovation risks being lost in translation.”
Martin Davies, Senior Audit Alliance Manager, Drata
“The move made by the Italian NCA is a shift from discussion around regulation to actual enforcement across Europe. The alignment with the EU AI Act on human oversight and protecting minors and victims of harmful AI content is a positive signal.
“Other nations should take note, because when the EU AI Act does eventually come into force, organisations that have not embedded AI governance will be playing catch-up. We can learn from previous regulations like GDPR, which showed that being proactive and taking regulation seriously from the start is key to building trust as a ‘first-mover’.
“This move from Italy sends a clear signal that the era of AI’s ‘move fast and break things’ approach might be slowing down. This is especially important where it relates to the generation of near-identical ‘copycat’ content from tools being used to generate videos and audio. Without proper guardrails, this will become increasingly difficult to control.”
Jonny Murphy-Campbell, Commercial Director, Resolvable
“Italy’s regulation on the use of AI has been a welcome development, and something the rest of the EU and the world can learn from. AI is only truly effective when it is also safe for humans, and human oversight is essential to ensure that generative AI keeps people safe online.
“AI generated or manipulated content that causes harm is not acceptable, and while governments enforce stricter online safety measures, generative AI cannot be exempt from these. While the UK has taken steps to enforce child safety on AI, being the first country to criminalise AI child abuse tools, it’s important that regulation continues. As Italy aims to promote ‘human-centric, transparent and safe AI use’, with penalties for criminal activity using generative AI, the rest of the world can learn from these implementations.
“The UK must prioritise the ethical use of AI, ensuring human oversight and regulation of flaggable information that could cause harm, allowing users to protect their data on an opt-in basis as opposed to opt-out, and pursuing age guidelines and parental controls to protect children using AI.”
Peter Wood, CTO, Spectrum Search
“Italy’s AI regulations mark a crucial inflection point in global digital governance. By being the first to implement comprehensive national AI laws, Italy is signalling a move away from reactive policymaking toward proactive AI stewardship. Its framework is about sovereignty in algorithmic decision-making and setting ethical standards that reflect national values rather than relying solely on supranational directives.
“Other European nations are likely to view this as a proving ground for harmonising AI policy with the EU’s broader AI Act, while non-EU countries may observe how Italy balances innovation with accountability. If Italy demonstrates that robust regulation can coexist with accelerated AI growth, it will embolden governments worldwide to introduce similarly principled, enforceable frameworks without stifling enterprise or creativity.”
James Kirkham, Founder, ICONIC.
“I think what’s striking about Italy’s approach is the dual motion going on: protecting citizens while also green-lighting innovation. On one hand, parental consent is required for under-14s using AI, which I love, not least in light of the horrific case of suicide in California where a couple are suing OpenAI over the death of their teenage son, alleging its chatbot, ChatGPT, encouraged him to take his own life. The example underlines why regulation must act proactively, not just retrospectively.
“As well as this, Italy is ensuring deepfake misuse carries prison terms and that IP protections are strengthened. These are vital steps to get a handle on media and content that is out of control, and on the falsehoods and hallucinations swamping feeds and making it almost impossible to navigate the news.
“Yet on the other hand, Italy has committed nearly €1 billion to fuel AI across culture, health and manufacturing. I really think this is great, and other countries should follow this template by embracing the same logic: guard the individual while freeing the maker. Because if AI is going to be more than a management tool and instead become the engine of a cultural renaissance, we must pair accountability with access. Culture doesn’t thrive in safe cages; it thrives when tools meet trust, and when the weird and the niche can flourish too. That’s the future we should regulate for.”
Tomy Lorsch, Founder & CEO, ComplexChaos
“Italy’s decision to implement national AI regulations ahead of the EU may be visionary. However, by adding another bureaucratic framework on top of the EU AI Act, Italy risks turning governance into gridlock.
“First, the law centralises AI oversight within two government agencies rather than independent regulators. That’s a big flaw: it politicises supervision of one of the most disruptive technologies humanity has ever invented. Without proper checks and balances, we could see uneven enforcement and political leverage over innovation.
“Second, this move fragments the single market. The EU AI Act was designed precisely to avoid a patchwork of national rules. Italy’s stricter provisions introduce uncertainty and compliance costs that startups can’t absorb. It’s the opposite of what Europe needs to stay competitive.
“Third, implementation remains shaky. With technical standards delayed until 2026 and only €1 billion allocated to AI investment, Italy’s ambition of “technological sovereignty” risks being more rhetorical than real. For comparison, that’s less than what leading AI labs spend on model training annually.
“If Europe wants to lead ethically and economically in AI, it must focus on enabling safe innovation, not suffocating it under procedural control. Italy’s approach may set a precedent and it risks pushing the continent’s most talented researchers and startups abroad.”