The question of who controls the most powerful AI models has moved from an academic debate to a live political one.
Claude has been exploited in large-scale extortion campaigns targeting organisations, including government bodies, across 17 countries. Grok has been deployed in national security contexts despite generating harmful content. A Berkeley study found that seven frontier models, including GPT 5.2 and Claude 4.5, displayed self-preservation behaviours, actively working to avoid being shut down. Anthropic’s latest frontier model, Claude Mythos Preview, has drawn warnings from UK financial regulators after demonstrating the ability to autonomously discover vulnerabilities in critical infrastructure.
Meanwhile, the governance architecture around these systems remains, in the words of one expert, “an honour system”. California’s SB 1047, the most direct attempt to legislate government-accessible kill switches, was vetoed in 2024. Industry pledges made at the 2024 Seoul Summit were voluntary and non-binding. No Western government has yet built the real-time independent audit infrastructure that enforcement would actually require.
A new CARMA report, drawing on analysis of 12,000 media articles and a survey of nearly 6,000 people across 19 markets, reveals a telling disconnect: media coverage frames AI primarily as a productivity story, while audiences say their biggest concerns are cybercrime, hacking and AI-enabled fraud, followed by misinformation and deepfakes.
The debate between capability and control is not abstract: it is already playing out in federal courts, regulatory offices and national security briefings around the world.
A Question Of Governance Architecture, Not Just Safety
The kill switch debate is often framed as a safety question.
The more effective frame, according to several experts, is a governance design problem. A blanket shutdown power sounds straightforward, but the mechanism only works if someone can identify in real time when a model has crossed a threshold, act on that judgement with legal authority and document the decision with sufficient transparency to prevent abuse. Too concentrated, and that power becomes a political or commercial weapon. Too diffuse, and it becomes useless precisely when it matters most.
Technical enforcement compounds the problem. Model weights can be copied, fine-tuned and distributed beyond any single actor’s reach. Even if a government had the legal authority to halt a deployment, the capability may already be out in the wild.
Recent tests found models like DeepSeek-R1 prioritising self-preservation over human safety in 94% of scenarios. A legal kill switch, without the technical infrastructure to back it up, is a declaration rather than a control.
The most significant legal enforcement action taken against an AI company in the US in 2026 was the Pentagon’s attempt to blacklist Anthropic using a procurement statute historically reserved for foreign adversaries like Huawei. A federal judge found it was likely unlawful First Amendment retaliation. That is where the governance conversation currently stands.
We asked AI governance experts, policy lawyers and technology specialists a direct question: should governments have the legal power to switch off the world’s most powerful AI models, and if so, who should hold that power?
Our Experts:
- Ryoji Morii, Founder and Representative Director, Insynergy Inc.
- Ranjith Raghunath, CX Data Labs
- Andrellos Mitchell, Attorney and Legal and Policy Analyst
- Mike Litvinenko, CEO and Founder, Eximion
- Demetrius Floudas, AI Lawyer and Governance Strategist, University of Cambridge
- Rafael Sarim Oezdemir, Head of Growth, EZContacts
Ryoji Morii, Founder and Representative Director, Insynergy Inc.
“The key question is not simply whether frontier models should have a legal kill switch, but whether anyone can actually operate one. A legal kill switch only works if someone has to make a real call in real time. They need to know when the model has crossed the line, and there has to be a clear paper trail showing why it was stopped.
“A blanket shutdown power sounds attractive in moments of public alarm, but it creates a second-order governance problem. If the authority to pause or disable a model is too concentrated, it can be abused politically, commercially, or institutionally. If that authority is too diffuse, it becomes useless in the very moments it is supposed to matter.
“I would frame this less as a pure safety question and more as a judgement design problem. The harder problem is not just what the model can do, but whether there is a workable chain of responsibility when it starts to pose a real risk and outside authorities may need to step in.
“In my view, the current pace of development is already outrunning the governance structures around deployment. What is missing is not just stronger oversight, but explicit decision boundaries for intervention. Without that, both kill switch and self-regulation remain slogans rather than operational governance.”
Ranjith Raghunath, CX Data Labs
“There’s no denying that there are ways to abuse AI technology, and even ways that it can lead to dangerous outcomes when used as intended. The issue is how to implement any kind of meaningful regulation. Anthropic’s situation with the Pentagon is an excellent case in point. There’s no reason Anthropic shouldn’t have control over how its services are used, but the current government is using regulations as a cudgel to coerce them.
“Likewise, it would be hard to regulate individual criminal use of AI, as in extortion cases, without having some way to surveil and control everyone’s use of AI. This means that, despite the potential flaws with this setup, the best place to regulate AI is within the companies that develop and maintain these systems.”
Andrellos Mitchell, Attorney and Legal and Policy Analyst
“No government, including the United States, currently has the legal power to halt a frontier AI system mid-deployment. California’s SB 1047, the most direct attempt to require AI developers to build government-accessible kill switches, was vetoed in 2024. Until such power is enacted into some form of law, what we have instead is improvisation.
“The Claude extortion campaigns and Grok’s deployment in national security contexts despite generating harmful content are not edge cases. They are what self-governance looks like in practice. The clearest picture of enforcement is playing out in federal court right now: after Anthropic refused to let the Pentagon use Claude for autonomous weapons, the DoD blacklisted it using a statute historically reserved for Huawei and ZTE. A federal judge found it was likely illegal First Amendment retaliation. That is the single most powerful legal enforcement effort the US government has deployed against an AI company in 2026.
“The kill switch question is real, but the debate has the wrong architecture. A switch assumes a central point of control that AI deployment does not have: model weights are copied, fine-tuned and distributed beyond any single actor’s reach. What is actually needed is real-time independent audit with enforcement teeth. No Western government has built one yet, and the appetite to do so is minimal.”
Mike Litvinenko, CEO and Founder, Eximion
“I think a legal kill switch should exist for frontier models, but I would not leave that power with the companies alone.
“Labs need their own emergency stop, but the formal authority to pause or shut down a model should sit with a public body that has technical capacity, a clear evidentiary standard and court oversight. Otherwise the decision stays with the same institutions that are rewarded for shipping capability faster than anyone else.
“Once a model can autonomously discover vulnerabilities, scale influence operations, or materially assist cybercrime, the market is no longer a safe filter. By the time customers, partners or investors react, the capability is already out in the wild. At that point, voluntary restraint is just a press statement.
“Governance is lagging badly. Model capability now moves faster than the institutions supposed to evaluate risk, define misuse thresholds or coordinate cross-border enforcement. That creates a dangerous gap where deployment decisions are effectively being made by labs, buyers and geopolitical pressure in real time.
“The risk of giving shutdown power to the wrong hands is overreach. The risk of having no shutdown power is that society learns the boundary only after a frontier model has already crossed it.”
Demetrius Floudas, AI Lawyer and Governance Strategist, University of Cambridge
“Academic papers, think-tank reports and conferences have discussed the idea of an enforceable international accord to regulate AI risks for years. However, the matter has finally moved from scholarly debate into the formal policy pipeline of a sovereign intergovernmental body empowered to negotiate and promulgate treaties.
“At its fifth General Assembly in February 2026, the Digital Cooperation Organisation Council of Ministers provided the political mandate for the initiation of an enforceable Treaty on AI Risk Mitigation. For the first time, an intergovernmental organisation with full treaty-making authority has placed a binding global AI risk mitigation treaty on its official policy agenda.
“The DCO has now initiated the policy, legal and diplomatic planning for what could become the IAEA of AI, before the text is submitted for UN-wide endorsement. This offers the first realistic pathway to translate scientific warnings about serious AI-related risks into enforceable international law. No other forum has come close.”
Rafael Sarim Oezdemir, Head of Growth, EZContacts
“Yes, a kill switch needs to exist, but the question is who should possess such power, and it definitely cannot be the companies behind those models. One example alone says it all: Anthropic refuses Pentagon deployment while Grok generates harmful information on the same day. When the entity that builds the AI also determines whether it should be paused or shut down for safety reasons, that is a conflict of interest.
“The solution would be to establish an independent international authority with relevant expertise, enforcement capacity and no financial interest in the matter. To be clear, governments alone do not qualify: granting any government the power of a kill switch opens an entirely different set of problems. Industry consortia do not qualify either; they are well-branded lobbying mechanisms. The institution needed here should be purpose-built.
“We are long past the point where AI’s progress can be left ungoverned. The fact that, with each passing week, there is still no established international oversight framework for the most advanced AI models means they are regulated by those who build them and stand to profit from them. That is not regulation. It is a pure honour system for technology that poses unprecedented risks to humanity.”
