President Donald Trump signed the Take It Down Act on 19 May, the White House announced. The law adds federal penalties for sharing non-consensual intimate images, whether genuine or AI-generated. Platforms must remove flagged material within 48 hours, and investigators can now charge offenders under a single national rule. First Lady Melania Trump steered the bill through its final stages.
Senators Ted Cruz and Amy Klobuchar praised the measure as a landmark for survivors of digital abuse. According to the White House briefing, advocates such as Brandon Guffey and Francesca Mani shared personal stories that helped the bill reach the President’s desk.
How Did Officials And Campaigners Welcome The New US Law?
Praise followed within minutes of the signing. X chief executive Linda Yaccarino told reporters that her platform will keep working with lawmakers and the National Center for Missing & Exploited Children to make the internet safer, especially for children.
South Carolina Attorney General Alan Wilson said “revenge porn” is “a cruel and deeply violating crime” and that the Act proves digital abuse now attracts real penalties. The National Organization for Women added that deepfake harassment strips bodily autonomy from women and girls.
House Majority Whip Tom Emmer welcomed tougher sentences, Representative Maria Salazar said platforms can no longer look away, and Senator John Cornyn underlined the rule that forces sites to delete unlawful images within 48 hours.
What Online Safety Measures Already Operate In The UK?
The Online Safety Act 2023 received Royal Assent on 26 October 2023. A government explainer published last month says the Act hands Ofcom new oversight of social media and search services.
Platforms must run risk checks, set age gates and remove unlawful posts. The tightest guards apply to children. Providers that publish explicit content must now apply tougher age verification, and all in-scope services finished risk assessments for illegal material in March 2025.
The law also protects adults. Category 1 sites must offer users identity-verification tools and filters so they can block anonymous abuse. Cyberflashing, intimate image abuse and threatening communications became crimes on 31 January 2024, and early prosecutions have already reached the courts.
Fines of up to £18 million or 10% of worldwide turnover, whichever is greater, back up Ofcom’s directions. The regulator can even apply to court for orders requiring payment processors and internet service providers to cut ties with sites that refuse to follow takedown orders.
Where Does UK Legislation Fall Short On Deepfakes?
Campaigners warn that synthetic abuse does not fit neatly under every part of the Online Safety Act.
The statute covers intimate image abuse, but only when a picture shows sexual content. A deepfake that places a public figure’s face on a clothed body may fall outside that definition, leaving victims to rely on slower privacy or harassment claims.
Platform speed creates another worry. The US law fixes a 48-hour deadline. The UK framework talks about “systems and processes” that must be proportionate to each company, yet it sets no fixed time limit.
Small forums face practical limits. Without clear technical rules, a hobby site might struggle to build automatic detection or hire moderators with specialist skills.
There is dispute over content that is harmful rather than unlawful. Ofcom’s upcoming codes will explain how to handle legal deepfakes that humiliate targets, but until those codes arrive, enforcement may vary.
Could Parliament Pass A Law That Targets Deepfakes Specifically?
Legal scholars say a clearer law would remove doubt for police and survivors, who view current offences as a patchwork. A more focused bill could look only at deepfake abuse, avoiding the wider free speech fights that slowed the Online Safety Act.
Children’s charities back sharper rules and report that AI imagery spreads faster than moderators can act. Tech companies may favour certainty as well. An explicit timetable and evidence standard could limit legal risk and channel spending toward detection tools.
Civil rights lawyers fear artistic parody could be swept up. Any draft would need tight wording so satire, reportage and protest stay protected.
Lawmakers would also need a plan for overseas platforms. Ad-revenue sanctions or service blocking orders could help when a foreign website ignores British rulings.
What Do Experts Think: Should The UK Introduce A Deepfake Law?
We asked experts what they think the UK should do about deepfakes… Here’s what they shared:
Kyle Tut, Co-Founder & CEO, Pinata, said:
“The UK government can enact similar legislation that holds internet platforms and users accountable while promoting public education on digital literacy. The UK could go one step further by utilising open source solutions to enhance transparency, detect manipulations, and verify authenticity. To truly combat AI misuse, passing legislation alone is not enough. Governments need to rely on technological innovations to address the evolving challenges as AI matures. There are many existing tools that detect and mitigate deepfakes by verifying data at the source of creation.
“The Take It Down Act makes it illegal for anyone to publish non-consensual intimate images or videos on the internet, specifically combatting AI-generated deepfakes and online exploitation. With AI becoming more accessible, deepfakes are growing into a major societal issue. From online harassment to fake news and misinformation, AI’s ability to overwhelm any piece of computer-generated data threatens our trust in the internet.”
Yu Chen, Professor of Electrical and Computer Engineering, Binghamton University, State University of New York, said:
1. “The law targets the nonconsensual publication of sexually explicit images, including those generated by AI, which is critical given the increasing accessibility of AI tools that create realistic deepfakes. By criminalising the publication or threat of publication of such content and mandating platforms to remove it within 48 hours, the legislation provides a legal framework to protect victims and hold perpetrators accountable.
2. “The overwhelming bipartisan support (409-2 in the House, unanimous in the Senate) reflects a rare consensus on the urgency of this issue. This broad coalition, including tech giants like Meta and Google, underscores the societal recognition of deepfakes as a serious threat.
3. “As one of the first major U.S. laws directly tackling AI-generated content, the Take It Down Act sets a precedent for regulating AI’s societal impacts. However, it also highlights the complexity of balancing victim protection with privacy, free expression, and technological innovation. To researchers like myself, it is valuable to study how this law is implemented, particularly how the FTC navigates enforcement and whether the law withstands anticipated legal challenges on First Amendment grounds.
4. “Questions remain about how platforms will verify nonconsensual content, how victims will navigate the takedown process, and whether the law will deter the creation of deepfake tools.”