The UK government wants to move faster than any other G7 country when it comes to adopting AI. Chancellor Rachel Reeves set out a £2bn public investment programme designed to speed up AI use across business and public services.
Reeves presents it as a choice about control. She said: “The choice is this: we can bury our heads in the sand and leave [AI] to other countries – whose values may differ from ours – to shape and own this technology.” Her view is that Britain should decide how AI develops, instead of importing systems and standards from elsewhere.
The money will back AI research and the computing power needed to use it at scale. A national quantum computing programme is a central part of the proposal. These systems are expected to increase processing capacity for complex data, with potential gains in medical diagnostics, energy systems and secure communications.
Ministers also want faster uptake in the NHS, local government and private companies. The Oxford-Cambridge innovation corridor is expected to act as a hub for research, skills and startup activity, with the intention of spreading economic gains beyond London.
So… Are Businesses Ready For That?
Government enthusiasm comes at a time when many companies are already using AI tools such as ChatGPT, Gemini, Claude, LLaMA-based models and DeepSeek. But new research suggests that usage does not automatically translate into confidence or productivity.
Censuswide surveyed 1,000 UK business decision makers for UnlikelyAI. It found that 87% say they trust AI outputs, yet 99% say they still check those outputs. Employees in large organisations spend an average of 2 hours 41 minutes each week using AI, compared with 2 hours 30 minutes verifying, checking or redoing what it produces.
UnlikelyAI calculates that this verification time represents more than £29bn in unrealised productivity every year across UK organisations with 250 or more employees. In other words, workers are spending almost as long policing AI as they are benefiting from it.
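UnlikelyAI has not published the methodology behind the £29bn figure, but a rough back-of-envelope sketch shows how a number of that magnitude can follow from the survey's verification time. The headcount and hourly labour cost below are hypothetical assumptions chosen for illustration, not figures from the research:

```python
# Back-of-envelope sketch of the annual cost of AI verification time.
# Headcount and hourly cost are hypothetical assumptions, not survey data.
hours_verifying_per_week = 2.5   # ~2h30m per employee, from the survey
weeks_per_year = 52
workers = 11_000_000             # assumed employees at UK firms with 250+ staff
hourly_cost = 20.50              # assumed fully loaded labour cost, in GBP/hour

annual_hours = hours_verifying_per_week * weeks_per_year * workers
annual_cost = annual_hours * hourly_cost
print(f"~£{annual_cost / 1e9:.1f}bn per year")  # ~£29.3bn under these assumptions
```

Under these illustrative inputs the total lands near £29bn; changing either assumption scales the result linearly, so the headline figure is sensitive to how headcount and labour cost are estimated.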
Just 57% report seeing any return on investment from AI so far. Only 22% say that return is substantial. Meanwhile, 13% say they have not seen a positive ROI and do not expect to either. Around 53% say employees spend as much time checking AI as they do using it.
The survey also found that about 51% say validating AI outputs is frustrating. Around 32% report “AI burnout”, described as mental fatigue from constant checking. About 30% report “AI blindness”, meaning they lose perspective on output quality after repeated prompting and inconsistent answers. Only 19% say AI makes them feel energised and empowered at work.
Why Is Confidence In AI Shaky Among Respondents?
Respondents gave practical reasons…
Around 32% say they cannot explain how AI systems generate answers. The same proportion say they do not know where their data goes or how it might be used. About 31% feel unsettled when identical prompts generate different answers. Another 31% report factual or logical errors, and 28% report hallucinations where AI confidently invents information.
William Tunstall-Pedoe, of UnlikelyAI, believes the issue lies in how the tools are used. He said: “These findings highlight a critical challenge: there has to be a better way to use AI. LLMs have strengths in specific, limited areas, but there’s a huge lack of understanding about when to use them and when to look to other, less-fallible models. That’s where this trust gap is coming from.”
He thinks companies need stronger internal discipline around AI. On workplace standards, he said: “Set ground rules within teams for when AI is and isn’t appropriate. Clarity helps eliminate anxiety and uncertainty – so people are free to perform better, in the knowledge they’re using a tool responsibly and within guardrails.”
He also argues that design is important.
“For example, at UnlikelyAI, we combine neural networks with symbolic reasoning to create models that are fully accurate, consistent and, most importantly, can explain every decision they make – so users can fully trust the output. This is something pure LLM solutions currently can’t do, so it’s important businesses understand these limitations,” Tunstall-Pedoe added.
“The most powerful AI is not necessarily the fastest or most complex – it’s the one that gives you certainty. Choose tools that produce consistent, verifiable outputs, that tell you when they can’t find an answer, and that leave a transparent audit trail. When you’re in a high-stakes business context, the long-term ROI on trustworthiness far outweighs the short-term gains of speed alone.”
Can The UK Speed Up Safely?
Business leaders generally welcome the government’s direction, but they want care taken as adoption starts to pick up.
Stuart Harvey said: “The UK’s continuous technological evolution is vital during a time where G7 and world counterparts strive ahead with AI development, but we can’t just push AI adoption for AI adoption’s sake. Organisations are already beginning to delegate decisions to AI systems, which is fine for less consequential tasks such as chatbot recommendations, but if that’s directing a government policy or dictating a business’s expansion strategy, the risks escalate rapidly.”
“Without the right data infrastructure, these decisions become unexplainable, un-auditable and unreliable. The next generation of AI infrastructure must look beyond the simple attraction of AI ease and focus on the integrity of AI decisions. The wrong decisions, if made due to rushed AI adoption, could impact millions of people and cost the economy billions,” he said.
Sachin Agrawal also backs the direction of travel. He said: “The government’s ambition on AI is both clear and commendable and a strong signal of intent in an increasingly competitive global landscape. The next step is being more targeted about where investment will have the greatest impact, upskilling regional talent, strengthening infrastructure to boost processing power, and ensuring data is managed responsibly.”
On regulation, he added, “Effective AI regulation isn’t about slowing progress, it’s instead about ensuring systems can be interrogated, challenged and improved. If businesses and governments cannot explain how an AI reached a conclusion, they are exposing themselves to legal, financial and reputational risk. Public confidence will ultimately determine how far and how fast AI can scale. Without clear standards and enforcement, a handful of high-profile failures could set back adoption far more than thoughtful regulation ever would.”
The UK wants to move first and fast within the G7. Now the question is whether confidence in the technology can keep pace with political intent.