The UK’s independent online safety regulator, Ofcom, has opened a formal investigation into X under the Online Safety Act 2023. The investigation is examining whether the platform has complied with its duties to protect users in the UK from illegal content.
Ofcom said its initial concern came from reports that the Grok AI chatbot account on X had been used to create and share non-consensual intimate images and images of children. The watchdog described these as potentially falling under intimate image abuse and child sexual abuse material offences.
Suzanne Cater, Director of Enforcement at Ofcom, said: “Reports of Grok being used to create and share illegal non-consensual intimate images and child sexual abuse material on X have been deeply concerning. Platforms must protect people in the UK from content that’s illegal in the UK, and we won’t hesitate to investigate where we suspect companies are failing in their duties, especially where there’s a risk of harm to children. We’ll progress this investigation as a matter of the highest priority, while ensuring we follow due process. As the UK’s independent online safety enforcement agency, it’s important we make sure our investigations are legally robust and fairly decided.”
Ofcom is assessing whether X has carried out sufficient risk assessments for illegal content and for children, taken steps to remove illegal material, and implemented strong age assurance systems to protect minors.
What Powers Does Ofcom Have?
The Online Safety Act gives Ofcom enforcement powers including fines of up to £18 million or 10% of qualifying worldwide revenue, whichever is greater. In serious cases, the regulator can apply to a court for ‘business disruption measures’. These could stop payment providers, advertisers, or internet service providers from working with the platform, or block access to it in the UK.
Arnav Joshi, Partner at Perkins Coie, said: “The Online Safety Act is kicking into high gear in terms of tackling the most toxic, illegal, harmful content online head on – look at the suicide forums case, with Ofcom driving enforcement into a notorious online suicide forum seemingly linked to multiple deaths. The signal couldn’t be clearer: if a platform hosts content that encourages self-harm or suicide, the regulator will come after it, and quickly, under these new powers.”
He added: “One striking thing about the Online Safety Act is its ‘business disruption’ measures – essentially the nuclear option. In serious cases that involve uncooperative businesses, Ofcom can go to court and cut off a service’s lifelines by blocking the platform in the UK or telling payment providers, ISPs and advertising services to stop doing business with it. Government ministers have already said they will back Ofcom in using these measures where necessary. No platform wants to be the test case for that. The mere prospect of being shut out of the UK market – or losing all your revenue streams here – is a game-changer, but must be used sparingly, with extreme caution and ensuring legal checks and balances are met. It’s also worth remembering that the content must, under existing UK law, be ‘illegal’ in the first place.”
How Will The Investigation Proceed?
Ofcom’s process begins with gathering and analysing evidence. If it finds a company has failed in its duties, it will issue a provisional decision. The company then has an opportunity to respond before a final decision is made.
Joshi noted that investigations can take time: “Typically, for most services, the process would involve several extensive rounds of engagement with Ofcom, some stop-gap mitigations being implemented while investigations continue, with additional requirements (and potentially fines) being ultimately imposed. Businesses can, and some no doubt will, challenge some of this enforcement in court, but it’s difficult to see how they could put up a successful challenge and continue operating legally in their current form.”
Ofcom said it will provide updates as the investigation progresses.
Experts Share: Will A Possible Ban Impact Free Speech?
Arnav Joshi, Partner, Perkins Coie
“The Online Safety Act is kicking into high gear in terms of tackling the most toxic, illegal, harmful content online head on – look at the suicide forums case, with Ofcom driving enforcement into a notorious online suicide forum seemingly linked to multiple deaths. The signal couldn’t be clearer: if a platform hosts content that encourages self-harm or suicide, the regulator will come after it, and quickly, under these new powers.
“One striking thing about the Online Safety Act is its ‘business disruption’ measures – essentially the nuclear option. In serious cases that involve uncooperative businesses, Ofcom can go to court and cut off a service’s lifelines by blocking the platform in the UK or telling payment providers, ISPs and advertising services to stop doing business with it.
“Government ministers have already said they will back Ofcom in using these measures where necessary. No platform wants to be the test case for that. The mere prospect of being shut out of the UK market – or losing all your revenue streams here – is a game-changer, but must be used sparingly, with extreme caution and ensuring legal checks and balances are met. It’s also worth remembering that the content must, under existing UK law, be ‘illegal’ in the first place.
“The debate around so-called ‘de-nudification’ apps is also coming to a head in the UK. Ofcom is understandably under intense pressure – and is already investigating services that cross a red line, having made clear that intimate image abuse is one of them. In practice, that means AI tools that help create illegal content won’t be tolerated under UK law, and we can expect swift enforcement action.
“The legal journey through to conclusion may however be long and winding. Typically, for most services, the process would involve several extensive rounds of engagement with Ofcom, some stop-gap mitigations being implemented while investigations continue, with additional requirements (and potentially fines) being ultimately imposed. Businesses can, and some no doubt will, challenge some of this enforcement in court, but it’s difficult to see how they could put up a successful challenge and continue operating legally in their current form.”
Irina Tsukerman, President, Scarab Rising, Inc.
“A potential ban on X in the UK would be a very serious step because it is not just a penalty for a company. It effectively limits a major channel of public speech and information for millions of ordinary users who did nothing wrong. Even if the aim is to stop harmful content, a platform-wide ban operates like a blunt instrument. It does not just target the illegal material, it targets the whole “town square,” including news sharing, political debate, emergency information, and community organising. That is why any ban has to clear a high bar in a democracy: it should be truly necessary, legally sound, and clearly proportionate to the harm being addressed.
“At the same time, the free speech argument cannot be treated as a shield for everything that happens on a platform. If a service is being used at scale to facilitate clearly illegal abuse, especially non-consensual sexual imagery and child-related sexual exploitation, there is a legitimate public interest in forcing rapid change. This is not a “hurt feelings” category of harm. It can be life-damaging, it can be criminal, and it can be very difficult to undo once it spreads. When regulators talk about the strongest tools, it is usually because they believe the company has not built effective systems to prevent predictable abuse. In that sense, the debate is not speech versus safety in the abstract. It is whether a company is meeting basic duties to prevent serious illegal harm.
“If Ofcom were to approve or pursue a block, the key issue for free speech is precedent and threshold. Once a democratic state demonstrates that it can block a major social platform, future governments can point to that precedent for other cases. Even if the first case is widely agreed to be extreme, later cases may be more political, more contested, or less clearly tied to illegal content. That is where free speech concerns become real. People worry not only about what is blocked today, but what becomes easier to block tomorrow. A “rare emergency power” can gradually become a normal policy tool.
“A ban also raises practical questions about whether it would work as intended. Many users would move to workarounds or alternative platforms. Some of those alternatives are less moderated, more encrypted, and harder for law enforcement to track. In that scenario, a ban might reduce mainstream visibility but increase concentration of harmful activity in darker corners online. That is not an argument for doing nothing, but it is a warning that bans can displace problems rather than solve them. Regulators need to ask whether a ban reduces harm overall or just changes its shape.
“There is also the risk of punishing the wrong group. Most users are not the perpetrators of the abusive content regulators are worried about. A platform ban punishes journalists, researchers, activists, small businesses, and ordinary people who use X for legitimate purposes. That matters ethically and politically. A democracy typically tries to target enforcement as narrowly as possible: remove illegal content quickly, identify perpetrators, penalize repeated failures, and compel design changes that prevent recurrence. A ban is justified only if narrower tools have failed or cannot realistically work.
“From a “necessary cause” perspective, the strongest argument for a ban is repeated, demonstrable non-compliance. If regulators conclude the company cannot or will not implement basic safety systems, and if illegal content continues at scale, then escalating consequences is rational. The point of escalation is not moral outrage, it is compliance. Heavy sanctions, mandated technical changes, independent audits, and rapid reporting obligations are often designed to avoid the need for a ban by forcing the platform to become safer. A ban is the endpoint when everything else is judged insufficient. In that sense, even the credible threat of a ban can be a tool to extract compliance.
“A careful way to frame this is that free speech and safety are not enemies, they are both public goods. The public cannot speak freely if people are being blackmailed, sexually exploited, or terrorised into silence. But the public also cannot speak freely if governments normalise blocking major speech platforms whenever regulators and ministers lose patience. The right question is: what is the minimum intervention that meaningfully reduces serious illegal harm while preserving open access to lawful speech? If the answer is “a ban,” regulators should show their work very clearly. They should explain why less restrictive steps are insufficient and define measurable conditions for lifting restrictions once compliance is achieved.
“If a ban were implemented, safeguards would matter. There should be a transparent legal process with independent oversight. There should be clear standards for what compliance looks like and what timeline is expected. There should be a route for challenge and review, and there should be a clear explanation to the public about the harm being targeted and why this step is proportionate. Without these safeguards, a ban risks being perceived as political censorship even if the underlying harm is real. Perception matters because legitimacy is the currency of regulation.
“Finally, there is the long-term lesson: platform governance is increasingly being treated like public infrastructure regulation. Governments are moving toward the idea that if you operate a major platform in a country, you accept responsibilities that go beyond “we host speech.” That does not mean companies become state-controlled. It means the baseline includes robust systems that prevent clear criminal abuse. The central question for the UK is whether it can enforce those responsibilities firmly without crossing into broad, habitual content control. Getting that balance right will define whether this is seen as protecting citizens or restricting civil society.”
Jonathon Narvey, CEO and Founder, Mind Meld PR Inc.
“There’s no question that the UK government is stepping on free speech. X is a place where free speech happens, or at least is supposed to happen. It’s a place of lively conversations about the most urgent issues of the day, on just about any topic.
“By the way, just to put this in context, there’s already a quasi-official ban on X. Mainstream reporting shows the UK has arrested tens of thousands of people over social posts, on X as well as elsewhere. Many of them have been jailed, often for lengths of time that look absolutely draconian, especially compared with actually violent offences.
“I say the ban is quasi-official because the rather low bar for getting a visit from the police seems to be that at least one person has complained they were offended. But the whole point of free speech is that it’s supposed to include offensive speech. Free speech isn’t needed to protect people over whether they think Taylor Swift is better than Lady Gaga, or which colour of car you prefer to drive.
“When the rule is that you can get into legal trouble just for voicing an opinion, but you don’t know which opinions are allowed, this will naturally create incentives for people to self-censor.
“Whether an official ban happens or not, there’s already a secret ban on X by the UK government. It’s just that right now, enforcement is effectively arbitrary. People need clear laws in order to follow them. When the cops can lock you up for years based off of saying even common-sense things that we all agreed on until 5 minutes ago, you’re not living in a free society.”
Jake Third, CEO, Hallam
“I used to love Twitter. And I’m pro free-speech.
“Beyond the obvious limits around incitement to violence, governments should never infringe on an individual’s right to speak their mind.
“But X has become a swamp of racist memes, violent videos, pornographic images, antisemitism, and disinformation bots.
“We as a company have the right to choose whether we want our brand to be associated with that.
“And since X no longer reflects the standards we expect for our people, our clients, or our brand – we are choosing to step away.
“Ofcom is the regulator for the UK’s communications services, tasked with ensuring Brits don’t get scammed and are protected from harmful practices. I would therefore welcome a move by Ofcom to support a ban on X.
“The UK government and our regulators need to ensure that, as consumers and businesses alike, we do not support or rely on platforms that incite hatred and harm. As technology leaders, we have a duty to ensure that the platforms we use and advertise on take responsibility for keeping people in the UK safe online.”