Elon Musk Puts Grok’s AI Image Tool On X Behind A Paywall After Deepfake Scandal

X has limited Grok’s image generation and editing tool to paying subscribers after a wave of misuse on the platform. Users had been tagging the chatbot under photos and asking it to digitally undress people without their consent. The altered images placed real people, mostly women, into sexualised poses. Reports indicated that even children appeared in some of the images.

Business Insider reported that the tool accepted requests to remove clothing and reposition bodies into inappropriate scenes. The posts began circulating in late December and spread quickly across the platform. As the images piled up, governments and regulators in the UK, EU, Italy, India and other countries threatened or began action against X and xAI.

Grok now replies to image requests on X with a message saying image generation and editing are limited to paying subscribers. That means most users on the platform can no longer create images through the chatbot. Paying subscribers have their names and payment details on file, which links their accounts directly to real identities, according to Business Insider.

Business Insider also reported that users who are not paying subscribers can still use Grok’s image tools on its standalone app and website. That means the feature has not been fully removed, only restricted on the social platform itself.

Elon Musk responded to the backlash on January 3, writing on X that anyone using Grok to make illegal content would face the same consequences as anyone uploading illegal material. The official X account also pointed users to its policy page, which says the platform has zero tolerance for child sexual exploitation and removes media showing physical child abuse, Business Insider reported.

The change has not ended criticism. Democratic Representative Jake Auchincloss of Massachusetts told Business Insider the move did not go far enough. He said the platform was turning digital abuse of women into a premium product and called for tougher rules on deepfake pornography.

How Is The UK Government Reacting?

The UK government has taken a hard line on Grok’s use to create sexualised images. A spokesperson for Prime Minister Sir Keir Starmer told Business Insider that making the tool a paid feature simply turns an AI tool that allows the creation of unlawful images into a premium service.

Starmer described the images as disgraceful and unlawful in an interview with Greatest Hits Radio, according to Business Insider. The comments came after governments across Europe began to question X and xAI over how the tool had been allowed to operate.

The BBC reported that Technology Secretary Liz Kendall said she would back regulator Ofcom if it blocks access to X in the UK for failing to follow online safety law. She said sexually manipulating images of women and children is despicable and abhorrent.

Kendall said the Online Safety Act gives Ofcom the power to block services from being accessed in the UK if they refuse to comply with the law. She added that the public would expect updates in days, not weeks.

Ofcom told the BBC it made urgent contact with X and xAI on Monday and set a firm deadline for an explanation. The regulator said it received a response and is now carrying out an expedited assessment.

The BBC also reported that Ofcom can seek a court order to stop third parties from helping X raise money or allow access in the UK if the company refuses to comply. These business disruption measures have not been tested before.

Musk replied on X that the UK government wants any excuse for censorship, posting in response to claims that other AI platforms were not facing the same level of scrutiny. The point remains, however: people, including children, were exploited, and more should be done to stop the dangers of deepfakes from getting worse. The harm done to these groups should not be addressed with a paywall alone.

Mel Hall, Legal Director at Morton Fraser MacRoberts, commented on Elon Musk’s decision to restrict Grok’s AI image tools to paying subscribers: “Restricting Grok’s image tools to paying users is not a solution to preventing non-compliance. Legally, the Online Safety Act goes further than takedown or access controls – it requires platforms like X to assess how their services could be used to generate illegal content and to reduce that risk before harm occurs.

“Where an AI tool enables users to manipulate images of real people in ways that could amount to non-consensual intimate images or indecent images of children, those risks should have been identified through illegal harms risk assessments before the tool was deployed or changed. Ofcom has wide enforcement powers if preventative duties have not been met.”