When DeepSeek released its V3 model early last year, it had an immediate impact on US markets. CNBC reported that the Nasdaq Composite fell 3% on the day of the launch, while shares in Nvidia dropped 17%, wiping out $600 billion in market value before recovering later.
V3 was reported to have cost under $6 million to produce and was trained using lower-powered Nvidia chips. That contrasted with the scale of spending by large US technology groups: Amazon, Microsoft, Meta and Google spent hundreds of billions of dollars on AI across 2025 and are expected to spend another $650 billion in 2026.
DeepSeek is now preparing to release its V4 model. CNBC said the launch is expected soon based on the company’s previous release pattern. Investors are watching closely because AI shares carry heavy weight in US indices, and past launches have influenced trading activity.
Has DeepSeek Used Restricted US Chips?
Reuters reported that a senior Trump administration official said DeepSeek’s latest model was trained on Nvidia’s most advanced AI chip, the Blackwell. US export controls overseen by the Commerce Department bar Blackwell shipments to China.
The official said US policy is: “we’re not shipping Blackwells to China.” Reuters also reported that the US believes DeepSeek will remove technical indicators that could reveal the use of American AI chips. The chips are believed to be clustered at its data centre in Inner Mongolia, according to the same source.
Nvidia declined to comment. The Commerce Department and DeepSeek did not respond to requests for comment. The Chinese embassy in Washington said Beijing opposes “drawing ideological lines, overstretching the concept of national security, expansive use of export controls and politicizing economic, trade, and technological issues.”
Chris McGuire, who served on the White House National Security Council under President Joe Biden, said: “This shows why exporting any AI chips to China is so dangerous. Given China’s leading AI companies are brazenly violating U.S. export controls, we obviously cannot expect that they will comply with U.S. conditions that would prohibit them from using chips to support the Chinese military.”
What Are The Distillation Allegations?
In a blog post uploaded a couple of days ago, Anthropic said it had identified “industrial-scale campaigns” by DeepSeek, Moonshot and MiniMax to extract capabilities from its Claude model. The company said the three labs generated more than 16 million exchanges through about 24,000 fraudulent accounts, in breach of its terms of service and regional access restrictions.
Anthropic explained that distillation is a common and legitimate training method when used internally. It added that it can also be used “to acquire powerful capabilities from other labs in a fraction of the time, and at a fraction of the cost, that it would take to develop them independently.”
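Distillation in its legitimate, in-house form typically means training a smaller "student" model to match a larger "teacher" model's output distribution rather than hard labels. A minimal sketch of the classic soft-label objective is below; the numbers and model names are illustrative assumptions, not taken from Anthropic's post:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions.

    This is the standard knowledge-distillation objective: the student
    is trained to reproduce the teacher's soft label distribution,
    transferring behaviour the teacher learned at far greater cost.
    """
    p = softmax(teacher_logits, temperature)   # teacher's soft labels
    q = softmax(student_logits, temperature)   # student's predictions
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))

# A teacher confident about class 0, and two candidate students.
teacher = np.array([4.0, 1.0, 0.5])
good_student = np.array([3.8, 1.1, 0.4])   # tracks the teacher closely
bad_student = np.array([0.2, 3.5, 1.0])    # disagrees with the teacher

# The loss is lower when the student's distribution matches the teacher's.
assert distillation_loss(teacher, good_student) < distillation_loss(teacher, bad_student)
```

The "illicit" variant Anthropic describes uses the same mathematics, but with the teacher's soft labels replaced by responses harvested from a competitor's API, which is why terms of service, rather than the technique itself, are the point of contention.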
In relation to DeepSeek, Anthropic said the activity involved more than 150,000 exchanges. It said the operation targeted reasoning tasks, grading tasks used for reinforcement learning, and censorship-safe alternatives to politically sensitive queries. Anthropic said it attributed the campaign through IP address correlation, request metadata and infrastructure indicators.
Anthropic added: “Illicitly distilled models lack necessary safeguards, creating significant national security risks.” It said such models could be used in military, intelligence and surveillance systems and that if open sourced, “this risk multiplies as these capabilities spread freely beyond any single government’s control.”
Taken together, the upcoming V4 release, the export-control questions and the distillation claims keep DeepSeek at the centre of the industry debate over competition, security and access in advanced AI development.
What Do Experts Say?
We asked experts whether DeepSeek’s low-budget model raises questions about regulation, commercial viability and the balance of AI power. Here is what they said:
André Ahlert, Managing Partner at AEX says:
“The true issue is not that DeepSeek utilised restricted Nvidia chips; rather, it is that hardware-based export controls are a leaky bucket. Chips move through intermediaries, including shell companies and third-party countries; once in a data centre, software and know-how finish the job.
“Previous technical cooperation between DeepSeek and Nvidia demonstrated that constraining chip availability was never the only limiting factor. If curtailing artificial intelligence’s frontier in China is the objective, the right lever must be deployed (computational access, data, and talent; not solely H100s and Blackwells).
“Regarding sustainability: DeepSeek’s “$6 million model” narrative rests on the idea of pure efficiency. If significant elements of that efficiency instead arise from restricted hardware or from distilled western models, then we are no longer measuring cost accurately and must instead weigh regulatory and legal risk.
“Companies and regulators will increasingly demand information about chips, data, and contract terms when evaluating supply chains. In my mind, the low-cost option is attractive for experimentation or for fledgling companies, but for any company with regulated products or services, or for any long-term deployment, the hidden risk premium will match or exceed the apparent hardware savings. The industry should use this as a wake-up call: cost leadership predicated on grey-area inputs is not sustainably viable.”
“Secondly, if Anthropic’s distillation accusations are accurate, they fundamentally change how we view API access and security for frontier models. Instead of simply copying a few responses, this allows for an industrial-scale extraction of reasoning and behavior from a closed model into competing tools that do not share the same safety or usage constraints.
“As a result, we see the transfer of capabilities with little accountability from a technical and policy standpoint. Additionally, the existence of “censorship-safe” versions and other alternatives to sensitive requests suggests that the intent of distilling was not only performance, but also the creation of politically deployable capabilities.
“The current level of “panic” in the US about V4 being ready soon is warranted. There are two leaks occurring simultaneously: one in hardware, over who can obtain the chips needed to train frontier models, and one in software, over who can copy a model’s outputs. As a result, the assumption that western labs can simply outspend and outrun all other labs will rarely hold true.
“I suspect that the industry will either need to treat frontier APIs as critical infrastructure with strict access control and auditing, or come to terms with the fact that any API available to the public will be subject to distillation. There is no clear path in between these two options; the regulatory environment will push towards the first option while open-source and low-cost options will push towards the second. How this tension between the regulatory and open-source environments resolves itself over the next five years will dictate the landscape of AI competition.”
“Lastly, DeepSeek has made headlines recently due to claims of using restricted chips, alleged Claude distillation, and ultra-low pricing. The main factor for an expert is the trend: we are watching a new challenger produce cheaply and fast, which contrasts with the dominant western narrative of safety, alignment and controlled release. The distillation and chip claims suggest that challenger companies are not only doing better science but also drawing on inputs they were meant to be denied. That shifts the debate from whether small, lean teams can outperform larger ones to what rules apply.
“My opinion is that export controls on chips alone will not last; we need a consistent regime covering compute, data and model access. If distillation can happen at the scale claimed, there will need to be a discussion about how to keep frontier APIs open without them becoming an unregulated training ground for competitors. Further, the panic in the U.S. over V4 is not just about one model; it is about whether the current advantage is structural or fragile, and it is likely more fragile than people realise. The important issues are governance and enforcement rather than raw talent or capability.”
Collin Hogue-Spears, Senior Director, Distinguished Technical Expert at Black Duck Software says:
“What are the implications of reports that DeepSeek is using Nvidia AI chips despite US bans?
“Short Answer: DeepSeek reportedly trained its pending model on banned Blackwell chips because export controls restrict sales on paper but are difficult to enforce in practice.
“Long Answer: DeepSeek told the world it built a frontier AI for $6 million, but that was only the cost of the final training run. That run was cheaper than U.S. competitors’, but the real gap was the distance between a marketing story and an engineering budget padded with restricted hardware, intermediary companies, and third-country data centres. You do not build a 671-billion-parameter model on a startup budget. You build it on a stockpile of restricted chips and then announce the number that makes the best headline.”
“What does it mean that DeepSeek is being accused of fraudulently using Claude data?
“Direct Answer: Anthropic alleges DeepSeek and two other Chinese labs used 24,000 fake accounts to extract capabilities from its Claude model at an industrial scale.
“Long Answer: Washington spent three years trying to slow Chinese AI with chip bans. DeepSeek’s reported response was to skip the hardware debate entirely and photocopy the answers from an American model’s homework. Anthropic says 16 million interactions transferred reasoning, coding ability, and safety-bypass techniques in a single operation. The chip ban locked the front door; they copied the answers through the back. Higher capability equals higher valuation.”
“How does all of this connect to US panic ahead of DeepSeek V4’s release?
“Direct Answer: I don’t see a U.S. panic, but there’s perhaps some level of apprehension at top AI companies, like OpenAI, Anthropic, and Google. The V4 anticipation serves DeepSeek because attention inflates perceived capability beyond demonstrated performance.
“Long Answer: DeepSeek’s first model wiped $590 billion off Nvidia’s market cap in a single day. The company does not need V4 to be finished to trigger the next shock; it needs V4 to be anticipated. Every headline about a pending release that threatens American AI dominance is a press release DeepSeek never had to write. US investors and policymakers are not panicking about technology. They are panicking about a story.”
“Tactical Takeaway: Technology decision-makers should evaluate AI models on two separate axes. First, performance: published benchmarks and independent safety audits, not pre-release hype cycles. Second, governance: six countries and agencies have already banned DeepSeek from government devices, a procurement signal that enterprise buyers should weigh independently from any technical assessment.
“If your vendor evaluation process would not survive the same scrutiny applied to government purchases, revisit your criteria before the next release cycle begins, or you are pricing risk against a product announcement, not a product.”
Thomas Randall, Research Director at Info-Tech Research Group says:
“The focus on DeepSeek using Claude’s outputs invites uncomfortable introspection, given that the entire foundation model industry sits on training data practices that remain opaque and legally unsettled. Every major provider globally has faced legitimate questions, and singling out DeepSeek is more geopolitical convenience than principled concern. It is easier to demand transparency from a competitor operating outside one’s jurisdiction, especially one in a regulatory regime with weaker IP enforcement.
“That said, if DeepSeek is distilling from existing models, it is short-circuiting innovation by leveraging other models in a regulatory regime with fewer IP restrictions. That comes across as counterfeiting rather than as genuinely undermining the premise that building AI is expensive (it is!).”
Aamir Qutub, Founder & CEO, Enterprise Monkey says:
“DeepSeek’s situation perfectly illustrates the three tensions that will define AI’s next chapter: geopolitical control, intellectual property, and the democratisation of powerful technology.
“On the Nvidia chip allegations: Export controls were always going to be a game of whack-a-mole. The real question isn’t whether DeepSeek accessed restricted hardware – it’s whether chip-level sanctions can ever work when the underlying knowledge to build competitive models is freely available. We’re seeing the limits of hardware-based AI governance in real time.
“On the Claude data accusations: This is the AI industry’s open secret. Model distillation – training smaller models on outputs from larger ones – sits in a legal and ethical grey zone. If these allegations hold, it won’t just be a DeepSeek problem. It will force the entire industry to reckon with how training data provenance is tracked and enforced. Expect this to become a major regulatory battleground in 2025-26.
“On the broader implications ahead of V4: The panic isn’t really about DeepSeek. It’s about the realisation that AI capability is diffusing faster than anyone predicted. When a Chinese lab can reportedly match frontier models at a fraction of the cost, it challenges the assumption that massive capital expenditure equals competitive advantage. For businesses I advise at Enterprise Monkey, the practical takeaway is clear: don’t bet your AI strategy on any single provider’s moat. Build flexible architectures that can swap between models, because the competitive landscape is shifting quarterly, not annually.
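The "swap between models" advice above can be reduced to a small pattern: code against one provider-agnostic interface and register concrete backends behind it, so changing vendors is a configuration change rather than a rewrite. A minimal sketch follows; the provider names and stub backends are hypothetical placeholders for real SDK calls, not any particular vendor's API:

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Completion:
    """A normalised response, independent of which backend produced it."""
    provider: str
    text: str

class ModelRouter:
    """Thin indirection layer between application code and model vendors."""

    def __init__(self) -> None:
        self._providers: Dict[str, Callable[[str], str]] = {}

    def register(self, name: str, complete: Callable[[str], str]) -> None:
        # Each backend is just a prompt -> text callable behind a name.
        self._providers[name] = complete

    def complete(self, prompt: str, provider: str) -> Completion:
        if provider not in self._providers:
            raise KeyError(f"unknown provider: {provider}")
        return Completion(provider, self._providers[provider](prompt))

# Stub backends stand in for real vendor SDK calls (no network here).
router = ModelRouter()
router.register("model_a", lambda p: f"[A] {p}")
router.register("model_b", lambda p: f"[B] {p}")

# Switching vendors is a one-string change at the call site or in config.
result = router.complete("Summarise the quarterly report.", provider="model_a")
```

In practice the callable would wrap each vendor's SDK and the router would also normalise authentication, retries and token accounting, but the structural point stands: no application code mentions a specific vendor directly.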
“The regulatory response needs to move beyond hardware restrictions toward output accountability – who trained on what, and how can we verify it? That’s the conversation that matters now.”