Expert Predictions For DeepTech In 2026, Part 2


Deep tech’s next chapter won’t be defined by a single breakthrough or dominant technology. Instead, 2026 is shaping up to be the year when deep tech becomes harder to categorise, more deeply entangled with geopolitics and far more visible to everyday businesses and consumers.

While part one of our deep tech predictions for 2026 explored autonomy, infrastructure and verification, the conversations now emerging point to a different set of pressures.

As deep tech matures, founders and investors are grappling with how innovation moves out of specialist circles and into markets shaped by regulation, talent shortages and global competition. The next wave will be less about what’s technologically possible, and more about who can execute, scale and sustain it.

Indeed, we’re shifting into a future of deep tech that may well face more practical, logistical constraints than technological, innovative ones.

 

Deep Tech Will Become a Strategic Asset, Not Just a Science Project

 

One of the clearest shifts expected in 2026 is how deep tech is perceived within organisations. Rather than being treated as experimental or exploratory, deep tech capabilities are increasingly being positioned as strategic assets tied directly to competitiveness, resilience and national interest.

And this change is being driven by necessity.

As supply chains fragment, energy systems strain and digital infrastructure becomes more contested than ever before, advanced technologies are no longer optional. Deep tech in areas like advanced materials, computing, energy systems and applied AI is becoming foundational to how economies function, not just how startups innovate.

As a result, governments, enterprises and investors are likely to become more selective and more involved. Funding will increasingly favour technologies with clear paths to deployment, regulatory alignment and long-term relevance. Ultimately, the bar for credibility will rise, pushing founders to balance scientific ambition with operational discipline.

In practice, this could mean fewer speculative shots in the dark and more focused deep tech companies designed to plug into existing systems. The distinction between “research-driven” and “market-driven” innovation will very likely blur further than ever before, reshaping how deep tech companies are built and backed.

 

Will Talent, Not Technology, Become the Limiting Factor for DeepTech?

 

While much of the deep tech conversation centres on breakthroughs, many experts believe the real bottleneck in 2026 will be people rather than technology. As advanced systems spread across industries, demand for specialised talent is outpacing supply, particularly at the intersection of science, engineering and commercial execution.

Deep tech teams increasingly need hybrid skill sets – that is, individuals who understand complex systems but can also deploy them responsibly at scale. This is proving difficult in a global market where experienced talent is scarce and competition is intense.

In response, organisations are expected to rethink how they attract, train and retain deep tech talent. Traditional hiring pipelines may give way to more interdisciplinary teams, internal upskilling programmes and partnerships with universities or research institutions. And at the same time, the geographic concentration of expertise could soften as remote collaboration and distributed research become more viable.

This talent dynamic is also likely to influence where deep tech innovation happens. Regions that invest early in education, infrastructure and ecosystem support may gain an outsized advantage, even if they’re not traditional tech hubs. In 2026, the ability to assemble and sustain the right teams may matter as much as the underlying technology itself.

And one of the most exciting things about this is that innovation and success in deep tech may finally move beyond the usual geographical monopolisation – that is, countries and regions that haven’t been able to compete in the past may now stand a real chance. Indeed, there’s potential for a democratisation of deep tech innovation that hasn’t been seen before.

 

 

Our Experts

 

  • Eoin Hinchy: Co-Founder and CEO of Tines
  • Alex Gusev: CTO of Uploadcare
  • Balaji Krishnan: CEO and Founder of Displace
  • Jason Hardy: CTO of AI for Hitachi Vantara
  • Matt Kunkel: CEO and Co-Founder at LogicGate
  • Martin Brock: Chief Technology Officer, Cambridge Consultants
  • Tim Ensor: GM Intelligence Services at Cambridge Consultants
  • Nix Hall: CTO of New Wave Biotech
  • Duncan Curtis: SVP of GenAI and Product at Sama
  • Rohith Devanathan: Founder of ScrubMarine
  • Jonathan Cleave: Group MD of consultancy Intralink
  • Toby Harper: Founder and CEO of Harper James
  • Phelim Bradley: Co-Founder and CEO of Prolific
  • Dr. Alexander Kihm: Founder at POMA

 

Eoin Hinchy, Co-Founder and CEO of Tines

 


 

“Prediction 1: In 2026, the companies that succeed with AI won’t be the boldest; they’ll be the ones with real guardrails. The question will shift from “Can AI do this?” to “Should AI do this?”

“2025 was the year of experimentation. Looking ahead to 2026, curiosity will give way to commitment as enterprises start to rely on AI agents as business-critical tools. But this psychological shift – moving from testing agents to trusting them – will widen the gap between those who succeed and those who don’t because of one defining factor: security and governance. Companies that invest upfront in defining clear controls and guardrails will unlock the transformative productivity gains that have long been marketed. Those that rush to deploy without proper oversight, on the other hand, will face public failures that damage their brand and erode trust. Flashy demos may impress, but they rarely endure. The next phase of AI maturity depends on learning to delegate responsibility. Governance is not a box to check; it is the strategy. Those who understand this will turn AI from theater into lasting impact.

“Prediction 2: CFOs will kill more AI projects than CTOs launch, as the era for AI for innovation’s sake ends and budget holders demand proof.

“Enterprises are reaching the end of the “AI for AI’s sake” era, and this will crystallize next year when finance teams stop politely nodding at AI roadmaps and start demanding P&L impact in quarters, not years. The vendors who survive will be those who can answer one simple question: what specific salary expense does this replace, or what revenue will this generate? A sharp divide will form between vendors offering quantifiable cost reduction and those offering aspirational transformation, and only one will survive procurement.

“Prediction 3: The most valuable AI agents won’t be the ones that you ask questions to, but the ones that alert you to problems you didn’t know existed.

“The next wave of AI innovation will be defined by agents that act before they’re asked, but the real differentiator will be how effectively humans stay in the loop. These systems won’t wait for prompts; they’ll monitor markets, compliance landscapes and customer signals in real time, surfacing insights and taking action autonomously. Yet human judgment remains critical, providing the context, ethics and nuance that AI cannot replicate. As organizations scale, they must design systems where oversight is built in, not bolted on, with clear frameworks defining when and how people step in and remain accountable for outcomes. The companies that get this balance right, where humans and machines operate in true tandem, will build the trust and integrity needed to stay ahead of the curve.”

 

Balaji Krishnan, CEO and Founder of Displace

 


 

“The way we buy televisions will forever change in the not-so-distant future. Currently, the reasons we buy TVs center on tech features like screen resolution, size, OLED, QLED, etc. It’s also been the biggest marketing tool and is the primary driver of TV sales. In the next couple of years, we’ll start seeing consumers making TV purchases similarly to how we purchase iPhones.

“Much like the revolutionary phone, we will start to look at buying TVs based on performance capabilities, hardware storage and memory. Screens are great because playing videos and displaying high-quality images are essential, but TVs will go beyond that. The TV will shift from being just a big screen that projects pictures to a computer on the wall that can handle complex actions and tasks.”

 

Dr. Alexander Kihm, Founder at POMA

 


 

“Deeptech will see further decline in the business model of selling “models,” with the focus shifting to systems that can be trusted, audited, and run cheaply at scale. Foundation models will keep improving, but rather gradually. Differentiation will likely depend on the quality of context, or how reliably the messy, unstructured data can be turned into model-ready knowledge.

“We expect a hard pivot from “RAG as a feature” to Context Engines as core infrastructure. That means multimodal ingestion, elaborate chunking, retrieval that’s measurable (coverage, latency, cost). Teams that can’t explain why an answer was produced will likely get blocked by procurement, regulation, or internal risk.

“Deeptech winners in 2026 will be those with the best context pipeline, allowing them to execute with fewer tokens, fewer hallucinations, and faster iteration.”
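To illustrate what “retrieval that’s measurable” could look like in practice, here is a minimal Python sketch that reports coverage, latency and cost for a retrieval step. The toy keyword retriever, corpus and cost figures are our own illustrative assumptions, not POMA’s actual pipeline or any real Context Engine API:

```python
# Hypothetical sketch: treating coverage, latency and cost as first-class
# metrics of a retrieval step. Corpus, queries and pricing are stand-ins.
import time

def retrieve(query, corpus, top_k=1):
    """Toy keyword retriever: rank chunks by term overlap with the query."""
    terms = set(query.lower().split())
    scored = sorted(corpus, key=lambda c: -len(terms & set(c.lower().split())))
    return scored[:top_k]

def evaluate(queries, corpus, relevant, cost_per_chunk=0.0001):
    """Report coverage (share of queries whose relevant chunk was retrieved),
    mean latency, and a simple per-chunk cost estimate."""
    hits, latencies, cost = 0, [], 0.0
    for q in queries:
        start = time.perf_counter()
        results = retrieve(q, corpus)
        latencies.append(time.perf_counter() - start)
        cost += len(results) * cost_per_chunk
        if relevant[q] in results:
            hits += 1
    return {
        "coverage": hits / len(queries),
        "mean_latency_s": sum(latencies) / len(latencies),
        "est_cost_usd": cost,
    }

corpus = [
    "invoice processing runs nightly at 2am",
    "the vpn requires multi-factor authentication",
    "quarterly reports are stored in the finance share",
]
queries = {
    "when does invoice processing run": corpus[0],
    "where are quarterly reports stored": corpus[2],
}
metrics = evaluate(list(queries), corpus, queries)
```

Teams that track numbers like these per release are in a position to explain why an answer was produced and what it cost, which is exactly the evidence procurement and risk reviews tend to ask for.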

 

For any questions, comments or features, please contact us directly.


 

Jason Hardy, CTO of AI for Hitachi Vantara

 


 

“Confidential computing, sovereign AI requirements, and data sensitivity concerns force enterprises to treat data placement as a trust and compliance issue first, cost issue second. The more valuable AI becomes, the more sensitive data it requires, creating an unavoidable tension between capability and risk that 2026 must resolve through infrastructure design. Hitachi Vantara’s AI portfolios are built with this tension in mind.

“We bring compute to data rather than data to compute, reducing exposure while enabling rich context. Features like our Time Machine capability help with explainability as data evolves. Additionally, data exposure has tangible costs, and 2026 makes this unavoidable. As AI demands richer context like PII, health data, and intellectual property, enterprises must balance value extraction against compliance risk. Trust in AI grows when you have better data governance and hallucination guardrails built into infrastructure, not just models.”

 

Matt Kunkel, CEO and Co-Founder at LogicGate

 


 

Agentic AI Isn’t Ready to Run the Show, and Caution Will Prevail

“Even as agentic AI dominates headlines and many companies begin to leverage its basic use cases, businesses will continue to exercise caution in 2026—leading to a slow rate of widespread adoption and increased human oversight. While the technology remains a common source of excitement across many different industries, it’s important to remember that agentic AI has only existed meaningfully for about two years—hardly enough time to establish the trust needed to turn mission-critical tasks over to autonomous AI agents. As organizations grapple with trust, governance, and security in 2026, agentic AI adoption will continue to mature, but it won’t achieve the sky-high adoption rates some have predicted.”

2026 Will Be the Most Explosive Year Yet for GRC

“If governance, risk, and compliance have not yet become a staple in your boardroom conversations, 2026 will change that. As AI accelerates innovation and interconnected data amplifies exposure, the next year will prove that GRC is an integral part of business profitability and resilience—in fact, business leaders won’t be able to escape the word “governance” in 2026. The smartest companies will modernize their GRC approach and connect risk, data, and accountability into one system of truth to better anticipate threats, accelerate decision-making, and build lasting resilience in an increasingly complex risk landscape. Those who don’t will find themselves outpaced, out of compliance, and more vulnerable than ever.”

The Tension Between Business and Technical Teams Will Reach a Breaking Point

“With AI and automation poised to accelerate change in 2026, the need for communication and collaboration between business leaders and risk/security professionals will amplify. Business leaders will continue to demand speed and innovation, while technical teams will attempt to pump the brakes until effective security and compliance standards can be established. This misalignment will force companies to rethink how these teams collaborate and share accountability, making it all the more important to integrate and consolidate security, risk, and compliance data and gain a more holistic view of enterprise risk. Companies need to realize that these are not competing interests—rather, they present an opportunity to align on goals and ensure every area of the business is working toward the same ultimate goals.”

 

Martin Brock, Chief Technology Officer, Cambridge Consultants 

 


 

The “Exit” from the AI bubble

“AI is at a point of climax, characterised by inflated expectations and mounting unsustainability. This bubble is likely to burst as the hype and reality get untangled to a significant extent.”

Commoditisation of Large Language Models (LLMs)

“The current generation of AI models will become utilities rather than premium differentiators. This will be very similar to internet connections. The industry will shift to delivering the same or slightly better utility or output, but at a lower energy usage or lower cost.”

Humanoid Robotics as a Hype Buffer

“Humanoid robots are a way of maintaining investor interest. They are not necessarily the right answer for all problems. A great example of this is in logistics and warehousing, where companies often start with a humanoid robot and then try to apply it to a problem. Instead, companies will need to shift their use to spaces that are very human in design, such as hospitals or care homes.”

Convergence between biology, AI, and physical robotic technology

“In the future we will start to see a shift where scientists no longer conduct the experiments but instead become orchestrators of machinery and AI to carry out the experiments. There is a wealth of opportunities from simulating and testing biology in the digital domain.”

 


 

Tim Ensor, GM Intelligence Services at Cambridge Consultants

 


 

“AI in Robotics

“For consumers, I suspect we’ll continue to see new lab-based achievements, and so there will continue to be excitement and hype. What will be interesting is whether we really achieve a ChatGPT moment, where somebody is able to release a model that’s usable across a number of different robotic platforms, which suddenly starts to put this capability actively into the hands of consumers.

“One of the bigger challenges that the industry faces is the balance between productivity and safety. The regulations for how we treat these new ranges of robotics are still being worked through, and so there is a process and a set of thinking to go through about how to get the best out of the amazing capability we’re seeing come through – but to do that in such a way that we can be confident it is safe in the same spaces where humans are working.”

Large Action Models

“Large action models are often another way of talking about Agentic AI. When we talk about using large language models combined with other capabilities like tools, memory and data, this is when we start getting into the whole field of Agentic AI. The current state, I would say, is deploying these systems in varying business use cases. The main focus is on accelerating software development – or rather, that’s the area where I’m seeing the biggest uptake. The other area is general business and business planning, where these kinds of techniques are now being put into action much more effectively, because large action models are much better at coping with a higher degree of ambiguity.”

Data Privacy and Security Innovations

“We definitely do see some challenges in being able to train AI in enterprise and government settings, on the basis that the data we need to train the models is in some way sensitive. One example would be federated learning, where you train models locally at the edge and then, rather than centralising all the data, you centralise the weights of the model. Another way we try to solve this problem is by training an AI model on analogous settings. At the moment neither completely solves the problem, and I would say this is a challenge in those domains where the specific use case relies upon, for example, large numbers of healthcare images. One way of dealing with that is for people in that specific sector to go through the necessary approvals and authorisation processes to get access to the data, but clearly it’s a laborious process. There isn’t, unfortunately, a silver bullet for how you solve this problem, because managing consumer and individual data appropriately is absolutely critical.”
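The federated learning pattern Ensor describes – train locally, centralise only the weights – can be sketched in a few lines of Python. This is a deliberately simplified illustration (a one-parameter linear model and two imaginary hospital sites), not a real deployment, which would typically use a framework such as Flower or TensorFlow Federated:

```python
# Illustrative federated averaging: each site trains on its own data and
# only the model weights -- never the raw records -- leave the site.

def local_update(weight, data, lr=0.01, epochs=20):
    """One site's training: gradient descent for y = w*x on local data only."""
    w = weight
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def federated_average(site_weights, site_sizes):
    """Server step: average the locally trained weights by dataset size."""
    total = sum(site_sizes)
    return sum(w * n for w, n in zip(site_weights, site_sizes)) / total

# Two hypothetical hospitals hold disjoint data drawn from the same rule y = 3x.
site_a = [(1.0, 3.0), (2.0, 6.0)]
site_b = [(3.0, 9.0), (4.0, 12.0)]

global_w = 0.0
for _ in range(5):  # five federation rounds
    local = [local_update(global_w, d) for d in (site_a, site_b)]
    global_w = federated_average(local, [len(site_a), len(site_b)])
# global_w converges toward the shared rule (w = 3) without pooling the data
```

The privacy trade-off is real but narrower: the server sees only averaged parameters, so sensitive records such as healthcare images never need to be centralised.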

 

Nix Hall, CTO of New Wave Biotech

 


 

“We’ll see deeptech organisations make far greater use of virtual bioprocess development. Across food, materials, biochemicals and personal care, the real bottlenecks are still in downstream processing – where yield, cost and environmental impact are ultimately determined.

“What’s changing is how teams approach these decisions. Instead of relying on months of trial and error, they can now map out thousands of process options digitally and quickly understand which ones are technically, commercially and environmentally viable. It brings much earlier clarity on trade-offs that traditionally only surfaced late in the scale-up journey.

“We’ll also see economic and sustainability analysis embedded much earlier in process design. With new regulations and scrutiny over environmental claims, companies increasingly want LCA-level insight while they are still shaping the process. Together, these shifts signal a more disciplined, simulation-first approach to developing bioprocesses: fewer dead ends, more confident decisions, and faster progress toward scalable solutions.”

 

Duncan Curtis, SVP of GenAI and Product at Sama

 


 

“AI development will feel far more engineered in 2026. The ecosystem has been running on fragmented vendors and improvised workflows, and that approach can’t absorb the pressure created by real demand. We’ll see the supply chain tighten into something more resilient, with clear lineage from data collection through deployment and continuous model evaluation built in as standard practice. Cognitive infrastructure will sit at the center of this shift, because human oversight and data operations are now part of the innovation stack rather than supporting functions. Companies that integrate these loops will move faster and with more stability than those treating them as add-ons.

“The business side will mature as well. Model makers will need to prove real commercial strategies, which will push consolidation and raise the bar for new entrants. Responsible scale becomes the differentiator, and trust moves from a compliance exercise to a product requirement.”

 


 

Rohith Devanathan, Founder of ScrubMarine 

 


 

“In 2026, deep tech will shift toward systems that can operate with more independence and judgment. Agentic AI will sit at the centre of this change. Instead of waiting for step-by-step instructions, these models will interpret goals, make informed decisions and keep workflows moving across both digital and physical domains. It marks the point where AI stops being a passive analyser and starts acting as a true operational partner.

“This evolution matters most in the physical world. Robotics will benefit from AI that can work with partial information, understand context and adapt when conditions change. That unlocks tasks that were previously “too variable” for automation, from inspections to coordination-heavy field operations.

“As these capabilities mature, deep tech becomes operational infrastructure rather than experimental technology. Businesses will trust intelligent systems to handle complex, real-world environments and deliver consistent value, even when conditions shift.”

 

Jonathan Cleave, Group MD of consultancy Intralink

 


 

“India, Vietnam and South Korea will be the most dynamic global growth markets for UK deeptech companies in 2026.

“India offers huge rewards, given the country’s sheer scale, increasing R&D investment and pro-business government. Its $4 trillion economy is projected to multiply eightfold by 2047. And, as it produces a third of the world’s STEM graduates, has 900 million digital users and an increasingly affluent consumer base, the country is awash with opportunity. Electric mobility, cleantech, medtech and life sciences will be particularly hot sectors.

“Vietnam should also be in companies’ sights. With GDP growing at 7%, its resilience in the face of global trade disruptions has reinforced its position as another of the world’s strongest markets. Vietnam’s youthful population of more than 100 million also presents a vast labour pool and a dynamic market for tech innovations.

“In Korea, there’s major government investment planned in AI infrastructure and next-generation data centres, alongside the rollout of a national AI regulatory framework. This will create valuable openings for deeptech companies specialising in AI platforms, enterprise AI tools and cybersecurity.

“Each market has distinct business cultures, procurement processes and regulatory frameworks, which can pose a challenge. But with the right guidance, they all hold huge promise for deeptech firms in 2026.”

 

Toby Harper, Founder and CEO of Harper James

 


 

“Most founders don’t start with the capital or experience needed to acquire another business, and even when they do, an acquisition brings risks that can be hard to manage in the early stages and can slow momentum.

“In my experience, many founders are also driven by the opportunity to build something from the ground up that reflects their own vision. When you put all of that together, it’s no surprise that starting a new venture is often the more natural path than buying an existing one.”

 


 

Phelim Bradley, Co-Founder and CEO of Prolific

 


 

1. Technical benchmarks stop mattering

“Technical AI benchmarks have saturated. The gap between the top and 10th-ranked models on Chatbot Arena shrank from 11.9% to 5.4% in a year. The top two models are separated by 0.7%.

“By the end of 2026, labs will stop reporting MMLU scores in model releases. The reason is simple: benchmarks measure abstract task performance, not what actually happens when people use these systems.

“Analysis of 4 million real-world prompts shows people use AI for technical assistance, reviewing work, and generation. None of the major benchmarks measure these things. RE-Bench found AI scores 4× higher than humans on 2-hour tasks but humans outperform 2:1 on 32-hour tasks.

“The replacement will be continuous evaluation systems that measure business outcomes. Anti-contamination measures will become standard: rotating unpublished test sets, not fixed benchmarks.

2. Autonomous agents lead to more human oversight, not less

“90% of AI agents fail within 30 days of deployment. 95% of enterprise AI pilots fail to deliver returns. Only 5.2% of enterprises have agents in production.

“2025 was the “Year of the Agent” at every conference. Lived reality is different. Apple and Amazon launched features with far fewer capabilities than promised. Gartner identified widespread “agent washing” where vendors rebrand chatbots as agents.

“By the end of 2026, most enterprise agent deployments will require human approval for any action with financial or customer impact. The companies that succeed will be those with the best human-AI collaboration frameworks.

“Progress will be measured by reliability through oversight, not autonomy. Only 62% of executives are confident in their ability to deploy AI responsibly. The 5% who succeed are building human-in-the-loop systems.”
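The human-approval pattern Bradley predicts can be sketched simply: agent actions with financial or customer impact are held for sign-off, while everything else runs autonomously. The action types and interface below are hypothetical illustrations, not any vendor’s actual API:

```python
# Minimal human-in-the-loop approval gate for agent-proposed actions.
from dataclasses import dataclass, field

# Hypothetical classification of which action types need a human sign-off.
HIGH_IMPACT = {"refund", "contract_change", "customer_email"}

@dataclass
class ApprovalGate:
    pending: list = field(default_factory=list)
    executed: list = field(default_factory=list)

    def submit(self, action_type, payload):
        """Route an agent-proposed action: run it, or hold it for a human."""
        if action_type in HIGH_IMPACT:
            self.pending.append((action_type, payload))
            return "held_for_approval"
        self.executed.append((action_type, payload))
        return "executed"

    def approve(self, index):
        """A human signs off on a held action, taking accountability for it."""
        self.executed.append(self.pending.pop(index))

gate = ApprovalGate()
gate.submit("log_summary", {"id": 1})       # low impact: runs autonomously
gate.submit("refund", {"amount": 250})      # high impact: queued for a human
gate.approve(0)                             # reviewer releases the refund
```

The design choice is that oversight is built in at the routing layer rather than bolted on afterwards: the agent cannot bypass the gate, and every high-impact action carries a named human approver.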