Anthropic has taken the United States government to court after the Department of War labelled it a “supply chain risk” to national security. The designation followed a breakdown in talks over guardrails on how Anthropic’s AI models could be used, particularly in relation to mass domestic surveillance and fully autonomous lethal weapons.
Defence Secretary Pete Hegseth said that “no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic,” extending the impact of the label across military suppliers.
In a statement published on 5 March, Anthropic chief executive Dario Amodei said: “Yesterday (March 4) Anthropic received a letter from the Department of War confirming that we have been designated as a supply chain risk to America’s national security.” He added, “We do not believe this action is legally sound, and we see no choice but to challenge it in court.”
The company has filed lawsuits in two courts and is seeking a temporary order that would allow it to continue working with military contractors as the case proceeds.
What Are The Financial Stakes?
In court papers reported by Business Insider, Anthropic’s chief financial officer Krishna Rao wrote that hundreds of millions of dollars in expected revenue tied to Pentagon-related work are at risk this year. Rao said that if the government discourages companies from working with Anthropic more broadly, the company could lose up to $5 billion in sales. That figure is roughly equivalent to its total revenue since it began commercialising its AI technology in 2023.
Anthropic’s chief commercial officer Paul Smith said in a separate filing that business partners are reacting strongly. He wrote that the government pressure is causing actions that “reflect deep distrust and a growing fear of associating with Anthropic.” He added that customers have paused negotiations, demanded escape clauses or cancelled meetings after the supply chain designation.
Amodei said the scope of the letter from the Department of War is limited. “The Department’s letter has a narrow scope, and this is because the relevant statute (10 USC 3252) is narrow, too. It exists to protect the government rather than to punish a supplier,” he wrote. He also said the law requires the Secretary of War to use “the least restrictive means necessary to accomplish the goal of protecting the supply chain.”
Why Are Rival AI Researchers Backing Anthropic?
The dispute has gained support from researchers at competing companies. More than 30 researchers from OpenAI and Google, including Jeff Dean, signed a joint amicus brief backing Anthropic. They signed in a personal capacity.
Their filing said: “If allowed to proceed, this effort to punish one of the leading US AI companies will undoubtedly have consequences for the United States’ industrial and scientific competitiveness in the field of artificial intelligence and beyond.”
OpenAI chief executive Sam Altman wrote on social media that enforcing the supply chain risk designation “would be very bad for our industry and our country,” even though OpenAI signed its own Pentagon contract after Anthropic’s talks collapsed.
Major cloud providers such as Amazon and Microsoft have said they will continue to offer Anthropic’s Claude models to customers who do not have Pentagon ties.
What Does This Mean For AI Competitors?
Essentially, the case tests how far the US government can go in limiting a private AI company’s access to defence-related business when policy disagreements arise. Anthropic says its objections relate only to “our exceptions on fully autonomous weapons and mass domestic surveillance,” adding, “We do not believe, and have never believed, that it is the role of Anthropic or any private company to be involved in operational decision-making—that is the role of the military.”
If the courts side with the Pentagon, rivals that rely on federal contracts may be forced to align their policies with defence priorities. If Anthropic wins a temporary order, competitors could gain reassurance that disagreements over AI guardrails do not automatically block access to government work.
In the meantime, the dispute places billions of dollars, defence relationships and the direction of US AI policy under judicial review, with consequences that stretch well past one company.