Prime Minister Pedro Sánchez has formally requested that Spain’s Public Prosecutor’s Office examine whether X, Meta and TikTok could face criminal liability over the alleged spread of AI-generated child sexual abuse material (CSAM) on their platforms.
Dr Pain of the University of Nevada, Reno said, “Spain’s decision in February 2026 to ask prosecutors to investigate major social media platforms marks a significant escalation in Europe’s ongoing effort to regulate the digital environment.”
Spanish Prime Minister Pedro Sánchez tweeted, “Today, the Council of Ministers will invoke Article 8 of the Organic Statute of the Public Ministry to ask it to investigate the crimes that X, Meta and TikTok may be committing through the creation and dissemination of child pornography using their AI.
“These platforms are jeopardising the mental health, dignity, and rights of our sons and daughters.
“The State cannot allow this. The impunity of the giants must end.”
In a letter first reported by El País, the government pointed to what it described as the abundance of such material circulating on social media.
Spain’s constitutional structure means the executive cannot order prosecutors to open a case. Attorney General Teresa Peramato must first consult the Board of Prosecutors of the Supreme Court before deciding on any investigation.
How Does This Connect To EU Law?
“From an EU policy perspective, Spain’s move aligns with—but also intensifies—the regulatory trajectory established by the Digital Services Act (DSA),” Dr Pain said.
The Digital Services Act already requires large platforms to mitigate systemic risks, including harm to minors and the spread of illegal content. Spain’s action tests whether national criminal law can operate alongside EU-level enforcement.
“By asking national prosecutors to examine criminal offences, Madrid is effectively testing the outer limits of platform accountability within the European legal framework. This could create a precedent whereby member states pursue parallel enforcement strategies alongside Brussels.”
The European Commission has opened its own investigation into X over sexually explicit deepfakes generated by its Grok AI chatbot. French authorities recently raided X’s Paris headquarters in a related probe. Spain’s move increases pressure on platforms already facing action across Europe.
“Another key implication concerns regulatory fragmentation within the EU,” Dr Pain added. If multiple member states pursue overlapping criminal or administrative cases, platforms could face uneven enforcement across the single market.
What Evidence Is Motivating Madrid’s Action?
“At its core, Spain’s action reflects growing alarm about the intersection of generative AI and platform amplification,” Dr Pain said.
The Spanish government has cited evidence that one in five young people in Spain, mostly girls, reported that AI-generated fake nude images of themselves had been shared online. That statistic underpins the urgency behind Sánchez’s request.
Earlier this month, Sánchez announced he would ban children under 16 from accessing social media. He is also proposing that repeat violations by tech executives be treated as criminal offences and that algorithm manipulation be criminalised. Spain’s Youth Minister Sira Rego has floated banning X outright. Deputy Prime Minister Yolanda Díaz said she had left the platform, arguing that remaining users were “feeding the politics of hatred.”
TikTok rejected any suggestion that it tolerates such material. A spokesperson said, “CSAM is abhorrent and categorically prohibited on our platform. TikTok has robust systems in place to thwart attempts to exploit or harm young people, and we continue to prioritize and invest in advanced technologies to stay one step ahead of bad actors.”
What Are The Risks For Digital Rights In Europe?
“Critically, the Spanish initiative reflects a broader European shift toward child-safety framing as the primary justification for digital regulation,” Dr Pain said.
He said that when regulation is driven mainly through child protection narratives, fundamental rights such as privacy, anonymity and freedom of expression may receive less attention.
“Indeed, critics warn that Spain’s wider package of measures, including a proposed social-media ban for under-16s, could expand state and platform surveillance.” The prime minister has previously advocated reducing online anonymity and increasing traceability of users, proposals that digital rights groups fear could chill speech.
“Ultimately, Spain’s probe represents both an opportunity and a risk for EU internet safety,” Dr Pain said. It presses the European Union to confront AI-driven sexual abuse and platform amplification, and it forces lawmakers to consider how to protect minors without undermining digital rights across Europe.
What Do Other Experts Say?
On how this move affects the EU, Trevor Horwitz, CISO and Founder at TrustNet, said: “If Spain moves to restrict social media access for minors or increases enforcement under the Digital Services Act, there are direct privacy implications. One example is if stronger age verification or age assurance is required, this typically poses the need for platforms to process additional personal data to determine a user’s age. That can increase the volume and sensitivity of data being collected.
“However, it’s important to note that the DSA does not override data protection law. Any measure, if reinforced within the EU, remains subject to the GDPR. Platforms must comply with data minimization, purpose limitation, and lawful processing requirements. Even when the objective is child protection, companies are still required to ensure that any personal data collected is necessary, proportionate, and adequately protected.
“What this means, in practical terms, is that child safety enforcement and privacy compliance operate simultaneously. Platforms will need to demonstrate that protections for minors are implemented in a way that complies with existing EU data protection rules. Regulators are not only examining whether platforms reduce systemic risks to children, but also whether those controls are implemented within the boundaries of established privacy law.”