Italy has pulled ahead in the race to regulate artificial intelligence (AI) on the continent. In September 2025, it became the first EU nation to pass a national AI law fully aligned with the EU’s own AI Act, forcing sectors like justice, healthcare, education and public administration to abide by stricter oversight, traceability and accountability rules.
The law mandates that AI decisions be traceable to their source, enforces human oversight and criminalises the harmful misuse of AI (including deepfakes and fraud) with prison sentences of up to five years.
It’s a bold move, no doubt about it, and it comes ahead of full EU enforcement. The EU’s AI Act, which entered into force on 1 August 2024, sets out a risk-based framework for AI, banning systems deemed an “unacceptable risk” outright while subjecting high-risk applications to strict obligations.
With Italy already acting, the question becomes what happens next: will other European countries follow suit, and will these national laws conflict with or reinforce EU regulation?
A Deep Dive Into Italy’s Approach
Italy’s law can be seen as a proactive attempt to fill regulatory gaps and accelerate clarity. As the Guardian notes, the legislation aims to ensure AI remains “human-centric, transparent and safe”, while emphasising innovation, privacy and cybersecurity. It also sets aside €1 billion to support AI, cybersecurity, telecoms and related sectors – an important move that shows the country is going beyond regulation and is willing to put some skin in the game, so to speak.
Alessio Butti, Italy’s Undersecretary for Digital Transformation, called the law a way to “bring innovation back within the perimeter of the public interest, steering AI toward growth, rights and full protection of citizens.” Indeed, in sectors like healthcare and the workplace, human decision-making must remain integral, and employees must be informed when AI is used. Ultimately, it’s all about transparency.
By contrast, the EU’s framework emphasises uniform rules across member states, aiming to avoid fragmented national regulations. The EU Act introduces obligations for transparency, risk mitigation, user rights and post-market monitoring.
The Act also bans certain practices outright – fairly controversial ones, like AI systems that manipulate human behaviour or classify people based on biometric traits or “vulnerabilities.”
That said, tensions are already emerging, which is to be expected. Some European business leaders have called for delays, warning that the heavy demands of compliance could stifle competitiveness. But the European Commission has rejected a proposed pause on implementation – “there is no stop the clock. There is no grace period. There is no pause,” said spokesperson Thomas Regnier.
Voices from the Field: Jiahao Sun on Centralised Versus Decentralised AI
From his vantage point as CEO of FLock.io, a decentralised AI platform, Jiahao Sun sees Italy’s law as exposing core tensions in how AI is built and governed. He warns, “the world is pushing for increased regulatory clarity for AI and for now, Italy appears to be leading the charge. They are the first country to implement the EU’s landmark AI Act and have cut no corners.
Italy’s law reiterates the main flaw in centralised AI – large-scale models depend on vast internet data, but harvesting it inevitably captures copyrighted content and personal information. Training a massive, general-purpose AI without violating these legal boundaries is simply impossible, exposing the fundamental weakness of the centralised AI approach.”
Thus, Sun argues the future lies elsewhere: “we need to be building the opposite – decentralised AI. It is a shame that the regulation does not encourage this. By keeping raw data on local devices and only sending insights to a secure blockchain, security is greatly enhanced and only approved content is processed. DeAI can also scale more efficiently, uses less energy and mitigates against political biases.”
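To make the pattern Sun describes a little more concrete, here is a minimal sketch of a federated-style workflow: the raw data stays on the device, and only a derived model update, plus a hash commitment standing in for a write to a secure ledger, is ever shared. Every name and mechanism below is an illustrative assumption for this article, not FLock.io’s actual protocol or API.

```python
# Illustrative sketch only -- NOT FLock.io's protocol. It shows the
# general DeAI idea Sun describes: train on private local data, then
# publish only a derived "insight" (a model update) and a hash
# commitment, a stand-in for posting to a secure ledger.
import hashlib
import json
import random

def train_locally(data, weight=0.0, lr=0.005, epochs=200):
    """Fit y = w * x on-device with plain stochastic gradient descent."""
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (weight * x - y) * x  # d/dw of (w*x - y)^2
            weight -= lr * grad
    return weight

def publish_update(old_weight, new_weight):
    """Share only the weight delta plus a SHA-256 commitment to it;
    the raw training data is never part of the payload."""
    update = {"delta": round(new_weight - old_weight, 6)}
    payload = json.dumps(update, sort_keys=True).encode()
    return update, hashlib.sha256(payload).hexdigest()

if __name__ == "__main__":
    # Private on-device data: noisy samples of y = 3x. Never transmitted.
    local_data = [(x, 3 * x + random.uniform(-0.1, 0.1)) for x in range(1, 6)]
    learned = train_locally(local_data)
    update, commitment = publish_update(0.0, learned)
    print(f"learned weight ~ {learned:.2f}")
    print(f"published update: {update}, commitment: {commitment[:16]}")
```

Real systems of the kind Sun has in mind add aggregation across many devices, consensus and verification on top of this; the point of the sketch is simply that the data itself never has to leave the device.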
For him, decentralised AI (DeAI) isn’t a fringe idea but a serious alternative to centralised models. “DeAI has the potential to solve numerous issues associated with the traditional centralised AI LLMs being developed by OpenAI, Google, Anthropic and others. AI developers must champion human-centered design, build safety nets and insist on transparent governance that empowers, rather than constrains existing technology. The enduring vision for AI is one where its immense potential serves all of humanity, alongside prioritising ethical integrity and innovation.” Ultimately, it doesn’t need to be one or the other.
Sun’s perspective offers a useful contrast. While Italy’s law tightens the limits on AI misuse, his view is that regulation should also steer incentives toward architectures with built-in privacy, local control and decentralised trust.
So, Will Other Nations Follow Or Chart Their Own Paths?
Italy may have grabbed headlines for being first, but others are watching closely. France and Germany have historically preferred to align with EU-level regulation rather than race ahead. But with Italy already showing what a national approach might look like, momentum could build, and pressure is certainly mounting.
Outside the EU, the UK’s position remains nuanced. It has backed a more agile, sector-by-sector regulatory approach rather than sweeping laws. That flexibility may appeal to startups wary of rigid mandates, though it risks lagging behind when consistency matters.
Meanwhile, the UAE is positioning itself as a global AI hub – welcoming innovation, investing in infrastructure and developing governance models that aim to balance speed and safety.
At the EU level, enforcers such as the new European AI Office will play a critical role. It’s tasked with supervising general-purpose AI (GPAI) systems and ensuring coherence across member states. To help developers, the EU has unveiled a voluntary Code of Practice for GPAI models, focusing on transparency, copyright and security, ahead of full enforcement.
But even as the EU pushes forward, critics warn that over-complex or inconsistent rules could create regulatory fragmentation – exactly what national efforts risk reproducing – and a potential bottleneck in innovation. That would not only slow things down; it could cost the EU its competitive advantage in AI to countries taking different approaches, like the UK, the US and the UAE.
Italy’s bold leap into AI regulation marks a defining moment for European and global governance. It shows that nations can act decisively, not just in response to innovation but ahead of it. And its new law does more than set guardrails; it raises a deeper question about how we build AI architecture in parallel with regulation. In the past, regulation has struggled to keep pace with innovation, but there is also a real risk of going too far in the opposite direction and over-regulating.
Jiahao Sun’s perspective brings urgency to that question – if centralised models struggle under new legal fault lines, decentralised AI offers a way to sidestep them entirely.
Indeed, the rest of Europe now faces a test – will it adopt Italy’s pace, strike a balance or chart an entirely different course? The decisions made in the coming years won’t just shape where AI is allowed; they may influence how AI is built in the first place.