UnlikelyAI Opens London Research Lab, Here’s Why

UnlikelyAI has opened a new research lab in Holborn, London, to tackle one of artificial intelligence’s toughest problems: trust. The company announced that the UnlikelyAI Lab will be backed by millions in investment and led by Oxford academic Callum Hackett, a linguistics and AI researcher known for his work on explainable technology.

The lab is being built to make AI systems accurate, auditable and understandable. Its work will focus on regulated sectors such as finance, law and healthcare, where explainability and traceability are essential. According to UnlikelyAI, businesses using AI often face problems verifying how models arrive at their results. The Holborn lab will address this by creating AI that can show its reasoning step by step.

The company said the lab’s research will develop “neurosymbolic” systems that combine machine learning with symbolic reasoning, a method that applies logical rules humans can follow and check. This approach aims to bridge the gap between how large language models generate information and how organisations need to verify it. In practice, that could make AI decisions clearer to regulators, auditors and customers.
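UnlikelyAI has not published the internals of its systems, but the general neurosymbolic pattern is well established. The sketch below is a minimal, hypothetical illustration in Python: every rule, fact and function name is invented for this example, and the “LLM claim” is just a placeholder string. It shows the idea the article describes: a statistical model proposes an answer, and a symbolic rule engine either re-derives that answer from explicit rules, logging each step as an audit trail, or rejects it.

```python
# Minimal illustration of a neurosymbolic pattern: a statistical model
# proposes an answer, and a symbolic rule engine independently derives
# (or rejects) it, producing a human-readable audit trail.
# All names here are hypothetical; UnlikelyAI has not published its API.

from dataclasses import dataclass, field

@dataclass
class Rule:
    name: str
    premises: frozenset  # facts that must all hold
    conclusion: str      # fact derived when they do

@dataclass
class AuditTrail:
    steps: list = field(default_factory=list)

def forward_chain(facts: set, rules: list, goal: str):
    """Derive `goal` from `facts` by repeatedly applying rules,
    recording each applied rule so the reasoning can be audited."""
    trail = AuditTrail()
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            if rule.premises <= derived and rule.conclusion not in derived:
                derived.add(rule.conclusion)
                trail.steps.append(
                    f"{rule.name}: {sorted(rule.premises)} -> {rule.conclusion}"
                )
                changed = True
                if rule.conclusion == goal:
                    return trail
    return None  # goal not derivable: reject the model's suggestion

# Example: check a (hypothetical) LLM's claim that a customer is loan-eligible.
rules = [
    Rule("R1", frozenset({"income_verified", "credit_score_ok"}), "creditworthy"),
    Rule("R2", frozenset({"creditworthy", "id_confirmed"}), "loan_eligible"),
]
facts = {"income_verified", "credit_score_ok", "id_confirmed"}
llm_claim = "loan_eligible"  # stand-in for an LLM's proposed answer

trail = forward_chain(facts, rules, llm_claim)
if trail:
    print("Claim verified. Audit trail:")
    for step in trail.steps:
        print(" ", step)
else:
    print("Claim rejected: no rule-based derivation found.")
```

Because the answer is only accepted when a rule-based derivation exists, the recorded steps give regulators and auditors a chain of reasoning to inspect rather than an opaque model output.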

Hackett says: “AI has become something that businesses stake their reputation on, so we simply can’t settle for ‘good enough’. It’s becoming clear that LLMs are approaching their scaling limits, and enterprises need a different path forward.

“We’re launching this lab to scale trustworthy AI by evaluating it in a way that guarantees it works for specific industries and markets, and to pioneer our neurosymbolic approach by combining LLMs with symbolic reasoning. We’re relentlessly pushing to raise the standards for enterprise AI and deliver AI that is truly enterprise-ready.”

Hackett will lead a team of engineers and researchers working on verification methodologies and regulated AI applications. He said the lab’s work would produce formal methods and publicly available benchmarks, helping companies assess AI systems against their business needs. The group’s first year will centre on proving accuracy and building transparent systems for enterprise use.

Why Is This Needed?

The lab’s launch follows increasing calls from regulators for AI systems to be explainable. The Financial Conduct Authority has highlighted accountability in AI for financial services, and similar requirements feature in the EU’s AI Act. KPMG’s research shows 61% of people are wary of trusting AI and 67% report low to moderate acceptance of it. UnlikelyAI believes businesses need systems they can verify before widespread adoption can happen.

The company has already tested its approach with Lloyds Banking Group and SBS Insurance Services. In a pilot with SBS, UnlikelyAI’s system achieved 99% precision while maintaining full audit trails, which the company said shows clients are ready to put trustworthy AI into daily operations.

William Tunstall-Pedoe, Founder and CEO of UnlikelyAI, says: “Enterprises today are having to choose between accuracy and performance when it comes to AI. In regulated industries, this is a critical barrier to adoption. Our new London lab will accelerate our development of accurate, explainable AI for those businesses, and I couldn’t be more excited to discover the impact it will have on our clients and the AI industry as a whole.”

How Does The UK’s New AI Growth Lab Fit Into This?

The launch of UnlikelyAI’s lab arrives as the UK government calls for evidence on its proposed “AI Growth Lab”, announced last week by the Department for Science, Innovation and Technology (DSIT). The Growth Lab would serve as a regulatory sandbox: a controlled environment where businesses can test AI systems that current regulations may restrict.

The government said the project would help the UK keep pace with global innovation and attract investment into responsible AI development. The OECD estimates that artificial intelligence could add between 0.4 and 1.3 percentage points to productivity growth over the next decade. If achieved, that could be worth between £55 billion and £140 billion to the UK economy every year by 2030.

The AI Growth Lab will test new ways to regulate AI in sectors such as healthcare, planning, robotics and professional services. It aims to cut waiting times for regulatory approval and let businesses trial AI under supervision. According to DSIT, companies in regulatory sandboxes have historically brought products to market 40% faster. The plan is for the Lab to operate under strict safeguards, granting temporary regulatory exemptions while keeping consumer and safety protections intact.

The proposal follows feedback from UK businesses, 60% of which told DSIT that regulation is a barrier to adopting AI. The department said it wants laws that keep up with modern technologies without weakening trust or safety. The Growth Lab would collect evidence on where regulations can be safely modified and use that information to inform permanent updates to UK law.

The government has drawn inspiration from earlier sandboxes such as the Financial Conduct Authority’s fintech sandbox, launched in 2016 and since replicated in countries including Japan, Singapore and the USA. It also builds on the Medicines and Healthcare products Regulatory Agency’s AI Airlock, which tests AI tools in clinical settings to understand their regulatory needs.

Through the new Growth Lab, ministers hope to give innovators more freedom to test advanced AI while maintaining public oversight. DSIT said this could “unlock billions” in economic value by 2035 through improved regulation and faster deployment of trusted technologies.