The Trump administration’s new plan, Winning the AI Race, sets out how the US government wants to lead in AI. It lays out more than 90 federal actions, covering everything from exporting AI systems overseas to building more data centres and stripping back federal rules that the administration says get in the way of development.
One of the most talked-about ideas is the export of full-stack AI packages. These bundles include everything from chips and code to apps and standards. The White House wants ‘American-made’ systems to be used in other countries, especially those that the US calls allies.
Inside the country, the plan supports faster approvals for building data centres and chip plants. It also calls for a bigger workforce in trades that are often overlooked, such as electricians and heating technicians. These workers are needed to keep new facilities up and running.
Another section deals with government contracts. From now on, federal departments must buy from AI companies that keep their systems “objective” and free from what officials describe as “top-down ideological bias.” The White House claims this is about protecting free speech in frontier models.
What Has The UK Decided To Do With Its Plan?
The UK’s AI plan, published as the AI Opportunities Action Plan, takes a quieter route. The focus is more on research, long-term investment and public projects. It starts with computing power: the government wants to expand its national AI computing capacity twentyfold by 2030.
This includes building new supercomputers in Cambridge and Bristol. From early 2025, these will be available for researchers and small businesses. The government has also extended the life of its current top computing system at Edinburgh University until late 2026.
A project called AI Growth Zones is also on the table. These are designated areas where AI-related infrastructure, such as data centres, can be built more easily. The first is planned for Culham, home of the UK Atomic Energy Authority. It will begin with a 100MW data centre, which could scale up over time, and the site will be run as a public-private partnership.
Instead of cutting rules, the UK wants to influence how AI fits within its legal system. It plans to announce rules on copyright and safety that support both researchers and the creative sector. Energy use is also part of the conversation. A new AI Energy Council will look into how to power AI projects in cleaner ways, such as using small nuclear reactors or renewables.
How Do Experts Think The Two Countries Compare?
Arshad Khalid, Technology Advisor at No Strings Public Relations, said, “The UK and US AI plans reflect very different approaches to regulation and innovation. The UK is focusing heavily on protecting users, especially minors, by introducing strict rules like age verification for online content.
“This shows a precautionary approach, prioritising safety and ethical concerns even if it means more regulation. The US plan, especially under the Trump administration, leans towards deregulation and pushing for rapid AI development and global leadership. It focuses more on economic growth and less on stringent controls.
“Both countries share a goal of maintaining technological leadership, but the UK’s method is more cautious, while the US prioritises speed and competitiveness. There’s value in both approaches. The US could learn from the UK’s emphasis on safeguarding users, which is essential to maintain public trust in AI. Meanwhile, the UK might consider the US focus on fostering innovation to avoid stifling development with too many restrictions. Balancing safety with growth will be key for future AI policies worldwide.”
Rhys Merrett, Head of Technology at The PHA Group, said, “Both jurisdictions are actively pursuing strategies to become global hubs for AI innovation, moving beyond the exploration stage of AI application to the actual implementation across the private and public sectors. President Trump’s Executive Order which revokes policies and directives which act as barriers to American AI innovation in favour of US leadership has set the conditions to explore new AI innovations through decentralisation. This comes with added risk, namely a lack of safety and protections, and the challenge of AI innovation lacking direction as to how it should deliver outcomes.
“While nowhere near as extreme, the UK has seemed to follow a somewhat similar approach, prioritising economic growth and innovation over strict regulatory controls or ethical oversight, which reflects the EU’s approach. This was demonstrated when it declined to sign the Paris Summit Declaration on Inclusive and Sustainable AI. However, the UK knows it cannot compete with the US on a levelled front, which puts it in an interesting position.
“In one instance, it could continue to follow the US approach, while integrating elements of the EU’s strategy which favours regulation to deliver the effective protection and roll out of AI solutions through measured initiatives. It’s important to acknowledge that as much as the US and UK are like-minded entities, they still consider each other as competitors, meaning they need to each forge different strategies.”
Bill Conner, CEO, Jitterbit, said, “As we have seen with other disruptive technologies, the competitive AI arms race will soon impact the global economy while influencing technical innovation, productivity, market efficiencies and the actual GDP of countries.
“Investing in AI is critically important, but overly aggressive policy cannot compromise AI accountability, transparency and data privacy. To lead in AI, the U.S. government must lead with principles. Responsible AI governance isn’t a side note — it’s the foundation of lasting global influence.
“Accelerating infrastructure and easing environmental and export regulation bottlenecks may offer the U.S. government an early advantage, but long-term sustainability will depend on the measured and provocative implementation of AI accountability into critical systems at home and abroad.
“This isn’t only a global AI arms race for processing power or chip dominance. It’s a test of trust, transparency, and interoperability at scale where AI, security and privacy are designed together to deliver accountability for governments, businesses and citizens. Without clear accountability frameworks, exporting AI risks creating vulnerabilities — turning a strategic asset into a liability, particularly when adversarial actors are quick to exploit weaknesses or manipulate systems to their advantage.”
He continued, “The U.K. must find a way to protect fundamental rights without sidelining AI innovation. Regulation should serve as an enabler, not a constraint. The real opportunity lies in building AI accountability frameworks that promote secure, ethical data usage, without paralysing the digital business models that rely on it.
“The U.K. government’s AI Opportunities Action Plan is a step toward turning AI ambition into actionable efficiencies, but trust will only come from consistent, transparent integration with AI accountability at the core. For governments, the challenge now isn’t just strategic alignment, it’s execution at scale, within clear guardrails.”