From Stargate to DeepSeek: Experts Comment On What This May Mean for the Future of AI and Data Privacy

There’s a lot going on in the worlds of tech and business at the moment, from China’s antitrust probe into Google to the myriad of executive orders that have recently been signed by President Donald Trump.

But, among the most talked-about headlines are the announcement of DeepSeek (and the massive fallout that followed) and Trump’s announcement of the $500 billion investment in Stargate.

Both the DeepSeek debacle and the Stargate saga are set to have a massive impact on the global AI industry, and already the effects are being felt far and wide.

So, what’s really going on in the world of AI, and are these events really that monumental?

 

The DeepSeek and Stargate Affairs Explained

 

The DeepSeek and Stargate announcements have dominated the news cycle for the past week, and while they’re not directly linked, they both raise important questions about the world of AI and the future of the industry.

So, what actually happened and why do these things matter?

Let’s start with the latter. Last week, recently inaugurated President Donald Trump announced a whopping $500 billion investment in Stargate – a joint venture between OpenAI, Oracle, SoftBank and MGX – that aims to develop OpenAI’s infrastructure by building a plethora of new data centres all over the United States over the next four years.

This is significant for a few reasons. First, this is a massive investment for the industry and OpenAI specifically, creating huge potential for the future of AI technology and moving one step closer to the fabled AGI (Artificial General Intelligence).

Second, the increased development of data centres is of concern for environmentalists as well as those considering the long-term viability and sustainability of the AI sector as data centres are notoriously massive consumers of energy.

Finally, Elon Musk has made a big deal of not supporting this investment in OpenAI. Trump has brushed this off by saying that Musk’s issue is more of a personal one with OpenAI co-founder Sam Altman, with whom he’s had both professional and personal clashes. However, Musk maintains that the investment is a poor decision because Stargate does not actually have as much financing as it claims to have.

So, that’s a very rough summary of the Stargate situation, but what about DeepSeek?

DeepSeek is a Chinese AI company that develops open-source large language models. Founded in 2023, it released its latest model to the public on the 20th of January. But why is this such a big deal?

Well, the reason everyone’s freaking out about DeepSeek (and let’s be honest, that isn’t an exaggeration) is that it claims to match the capabilities of OpenAI’s models at a fraction of the cost, using far less energy and with fewer restrictions.

While the long-term impact of this low-cost AI model remains to be seen, the announcement of its existence has already sent shockwaves not only through the AI industry but through Wall Street too. Indeed, the most immediate and noteworthy result was a drop in tech stocks, Nvidia’s in particular.

Nvidia, a multinational tech company that has become a leader in AI hardware, took the biggest dive, suffering a shocking 17% drop that wiped nearly $600 billion off its market value – the largest one-day loss ever recorded for a US company.

Unsurprisingly, the stock markets have since recovered somewhat, but DeepSeek’s potential impact on the industry is still daunting and very much up in the air.

The public launch of DeepSeek and the massive investment in Stargate aren’t directly linked, but both have the potential to exert a huge influence not only on AI within the US economy, but on the global AI arms race.

But the question plenty of people are asking now, especially in light of the recent Data Protection Day, is: what do these things mean for online security and data privacy?

 

The Data Privacy Dilemma: Progress Vs. Security?

 

Both the investment in Stargate and the release of DeepSeek’s technology mark significant progress, not only in AI technology but also in the push towards AGI.

However, as innovation continues to progress in the industry, other issues become more pertinent than ever before, and one of the most important ones is data protection.

With a potential big shift towards DeepSeek’s China-based technology, lots of questions have cropped up regarding how secure the data will be, what it’ll be used for, how Chinese regulations may differ, what the data retention policies are and more.

Plenty of industry experts are concerned by what has been described as a cheap solution that doesn’t necessarily protect users and data privacy. Some assert that this may be a grey-area issue centered on differences between Western standards and those held by China, while others are a bit more straightforward in their opinions.

For instance, Alastair Paterson, CEO and Co-Founder at Harmonic Security, says that “DeepSeek doesn’t even pretend to protect data”. He adds that “this is a real problem and one that exacerbates existing problems with AI adoption.” Indeed, he’s not wrong – increasing accessibility to advanced AI technology is exciting, but it opens doors to a great deal of risk too, and the consequences could be devastating.

Ultimately, however, there’s no stopping the creation of new, more cost-effective technology. Rather, the solution is going to be to tread carefully.

Lauren Murphy advises that while innovative AI tech is exciting and most welcome, caution always ought to be a top priority. So, she says, we need to find a way to “stay optimistic, but stay informed; make sure you do your due diligence to understand the implications before making the leap.”

Let’s see what else our experts have to say.

 

 

Our Experts

 

  • Sarah Murphy: General Manager of EMEA at Clio
  • Lauren Murphy: Founder & CEO of Friday Initiatives
  • Bill Conner: CEO of Jitterbit
  • Jacob Beswick: Dataiku’s Director of AI Governance
  • Chris Anley: Chief Scientist at NCC Group
  • Ben Goertzel: Creator of Desdemona, CEO of the Artificial Superintelligence (ASI) Alliance and the Founder of SingularityNET
  • Alastair Paterson: CEO and Co-Founder at Harmonic Security
  • Andrey Korchak: Serial Entrepreneur and CTO
  • Alasdair Anderson: VP of EMEA at Protegrity
  • Ellen Benaim: CISO at Templafy

 

Sarah Murphy, General Manager of EMEA at Clio

 


 

“The rapid emergence of AI tools like China’s DeepSeek highlights how swiftly technology is transforming industries – including the legal sector – while raising important questions about trust, security, and the handling of sensitive data.

Clio’s recent Legal Trends Report revealed that 96% of UK lawyers are now incorporating AI tools into their workflows in some way, creating unprecedented opportunities for small firms and sole practitioners to compete with larger firms by streamlining operations and focusing on high-value client work.

However, as these technologies evolve, so too does the responsibility to adopt them thoughtfully. Legal professionals must prioritise solutions designed with compliance and confidentiality at their core. Client trust is paramount and mishandling sensitive information risks reputational damage that no amount of innovation can repair.

Tools like DeepSeek serve as a timely reminder that while AI has the potential to revolutionise the legal profession, its adoption must always align with ethical standards and a commitment to client care.”

 

Lauren Murphy, Founder and CEO of Friday Initiatives

 


 

“I see both promise and risk in DeepSeek’s offering. While cost-effective AI excites us, key concerns remain – data jurisdiction, governance transparency and regulatory hurdles. Compliance with GDPR, CCPA and national security laws is crucial, as is ensuring model explainability and bias mitigation. Enterprises must carefully assess the risks of integrating AI models trained under different regulatory frameworks.

“While innovation is welcome, due diligence is essential, especially for sensitive data. Stay optimistic, but stay informed; make sure you do your due diligence to understand the implications before making the leap.”

 

Bill Conner, CEO of Jitterbit

 


 

“DeepSeek potentially presents a new level of threat to enterprises, businesses and governments globally. Its seemingly overnight popularity and free-to-use AI model make it look like an innovative new arrival and serious swap-out competitor to well-funded LLMs from its well-trusted rivals. In reality, DeepSeek represents a clear risk for any enterprise whose leadership values data privacy, security and transparency.

Proactive and privacy-minded enterprises should do strict due diligence with all LLMs and AI services, not just DeepSeek. But in this case, and as stated in their own privacy policy, DeepSeek is a shared cloud service run in China with data being stored in China — potentially introducing unknown risks to data privacy, compliance mandates and security controls.

AI innovation is moving at a rapid pace. Are CEOs, business leaders and high-placed officials ready to jeopardize the sanctity of their data without the proper cautions? Enterprises will want to jump on the latest AI technology to keep pace, but they must remain prudent for long-term sustainability.”

 

Jacob Beswick, Dataiku’s Director of AI Governance 

 


 

“DeepSeek’s performance has caught a lot of attention – from consumer and business users. If history is any indication, we can expect a lot of individuals to interact with V3 in many different contexts. There are some key things that the UK government and the AI Safety Institute should consider:

What is DeepSeek’s data retention policy with respect to user inputs? Related to this, what rights does DeepSeek hold to leverage user inputs to train its model? In the context of China’s different data regime, what data has gone into training the model? And are we individually, collectively, and organisationally aligned with whatever approach it has taken?

Ultimately, the government needs to be asking itself whether DeepSeek aligns with its collective expectations when it comes to AI safety, security, and risk. These are questions that should be asked of any provider, Chinese or otherwise. This is where the AI Safety Institute can add value.”

 

Chris Anley, Chief Scientist at NCC Group

 


 

“We have already seen cyber attackers try to take advantage of the surge in DeepSeek users. The excitement around new technology shouldn’t overshadow the importance of safeguarding personal and sensitive information. It is critically important that users remain cautious about potential data privacy and security issues.

“The scramble among US and UK tech companies to fast-track AI model development in response to competition could result in shortcuts being taken on security and privacy protocols. Ensuring these measures aren’t compromised is critical as the industry races ahead.

“In a rapidly evolving regulatory environment, where rules struggle to keep pace with the global development of AI technology, both businesses and the public need to take proactive steps to ensure security, privacy, and accountability. Protecting against risks must not be an afterthought.”

 

Ben Goertzel, Creator of Desdemona, CEO of the Artificial Superintelligence (ASI) Alliance and the Founder of SingularityNET

 


 

“I don’t think DeepSeek brings us one millimeter closer to Artificial General Intelligence (AGI), but I do think it brings us closer to commercially viable large language model (LLM) applications, which is fantastic.

DeepSeek remains a transformer neural net and has all the profound cognitive limitations that cognitive scientist Gary Marcus and other commentators have noted in all transformers: inability to tell fact from hallucination, lack of a coherent world model, lack of true meta-cognition and understanding of itself and others, inability to build compositional abstractions and ground them in reality, and so on.

I would say DeepSeek probably represents LLMs moving from a phase of intelligence advance toward a phase of efficiency optimisation, and the next huge intelligence advances will probably come from other paradigms like neural-symbolic systems or neuromorphic computing.”

 

Alastair Paterson, CEO and Co-Founder at Harmonic Security

 


 

“China spent many years launching cyber attacks and stealing our IP; now our employees can cut out the middleman and upload it to them for free. Employees will use whatever tool helps them in their job the most. No amount of blocking will stamp out its use.

Just as employees are bypassing controls with ChatGPT, so too will they do it with DeepSeek – and they’ll potentially send confidential data straight to China. DeepSeek doesn’t even pretend to protect data – its privacy policy clearly states that customer data is used to train models and that the data resides in China. This is a real problem and one that exacerbates existing problems with AI adoption.

So much has changed in just a week or so that we can expect this space to evolve rapidly. China has already announced a one trillion yuan investment as part of an AI plan to rival Project Stargate. This means more models and services will emerge from China, such as Kimi. Qwen is another example of an AI company with both models on HuggingFace and a chat interface. If you care about data privacy implications, you should probably care about Kimi and Qwen, too.”

 

Andrey Korchak, Serial Entrepreneur and CTO

 


 

“One reason China can advance more quickly in certain industries with smaller budgets is its highly flexible personal data laws. Many things can be done with people’s data without explicit permission, reducing data acquisition costs, minimizing legal risks for tech companies, and enabling experiments that would be nearly impossible in the US or EU.

With the AI race between the US and China now a top priority for the US government, it’s unlikely we’ll see additional regulations on private data anytime soon. Some may advocate for more open data policies, but it’s hard to predict when or if that will happen. I’d wager that certain restrictions will be eased once China achieves significant breakthroughs in this field — despite operating on a fraction of the US budget.”

 

Alasdair Anderson, VP of EMEA at Protegrity

 


 

“The DeepSeek and Stargate announcements confirm that the AI race is now a global arms race, which will undoubtedly increase the pressure on data privacy. The scale and speed of AI development means that personal data is being processed, reused, and analysed in ways that outpace current regulatory frameworks. DeepSeek’s terms notably lack any mention of anonymisation, and both DeepSeek and OpenAI reuse the personal data sent to them, raising significant concerns about data sovereignty and user control.

For businesses, this underscores the need to adopt proactive data protection strategies—relying on encryption alone is not enough. Tokenization, masking, and controlled data anonymisation are critical to ensuring that even if AI developers are breached or exploited, they cannot expose sensitive information. As AI-driven threats evolve, organisations must prioritise data minimisation and desensitisation, not just for compliance but to mitigate the growing risk in the age of an AI arms race.”
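Anderson’s point about tokenisation and masking is easy to illustrate. The short Python sketch below is illustrative only – the regex, salt and function names are hypothetical rather than any vendor’s API – but it shows the basic idea: sensitive values are swapped for stable, non-reversible tokens before a prompt ever leaves the organisation.

```python
import hashlib
import re

# Illustrative sketch only – the regex, salt and function names here are
# hypothetical, not any vendor's API.

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def tokenise(value: str, salt: str = "org-secret") -> str:
    """Swap a sensitive value for a stable, non-reversible token."""
    digest = hashlib.sha256((salt + value).encode()).hexdigest()[:10]
    return f"<TOKEN_{digest}>"

def desensitise_prompt(prompt: str) -> str:
    """Mask email addresses in free text before it is sent to an external LLM."""
    return EMAIL_RE.sub(lambda m: tokenise(m.group()), prompt)

if __name__ == "__main__":
    raw = "Draft a reply to jane.doe@example.com about the overdue invoice."
    print(desensitise_prompt(raw))
    # e.g. "Draft a reply to <TOKEN_...> about the overdue invoice."
```

In a production setting, a tokenisation service would typically also keep a secure mapping so that authorised systems can re-identify values later – that reversibility under strict control is what distinguishes tokenisation from simple masking.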

 

Ellen Benaim, CISO at Templafy

 


 

“The introduction of DeepSeek to the AI market is a welcome one and marks the beginning of innovation to make AI more resourceful and more widely used. However, it undoubtedly puts pressure on other AI providers to do more with less.

From a security perspective, there are some concerns, as we’ve already seen one critical vulnerability, which left a database of usage data wide open. Businesses of all sizes should carefully consider which AI they are using and what data is being inputted – speed is not everything; accuracy and security must be accounted for too.”