Experts Comment: Is There A Lack of AI Expertise Among the General Workforce In the UK?

As artificial intelligence (AI) rapidly reshapes industries, new reasons for concern seem to emerge constantly. Most recently, though, businesses across the UK have been confronting a different kind of worry – one that is less about the direct influence of AI on businesses or people, and more about a problem we have neglected to consider until now.

Even if AI becomes more advanced still, will the general workforce actually be capable of using it properly? Do workers have the skills and expertise needed to keep up? While AI adoption surges in sectors from healthcare to finance, the pace of upskilling appears uneven, raising questions about preparedness and potential barriers to innovation.

It’s an interesting question, and something of a lateral move from most of the concerns arising daily about the progress of AI – less existential doom, and more: are these innovations even going to matter if we don’t know what to do with them? Is it all a pipe dream?

If There Is a Lack of Expertise, What Does That Mean and What Can We Do About It? 

 

A lack of expertise in AI could mean many different things. Is it a lack of technical knowledge, or is the challenge more about mindset and application? And, where should responsibility lie: with employers, education systems or individuals?

The answer is by no means simple – it never is. Some argue the gap is overstated, pointing to the growing availability of user-friendly AI tools. Others warn that without targeted training and better integration of AI literacy into workplace learning, the UK risks falling behind in the global digital economy.

From practical skills shortages to cultural resistance, we’ve gathered a group of experts to unpack what’s really holding workers back and explore what potentially needs to change to build a workforce ready for the AI era.

 

Our Experts

 

  • Angie Ma: Co-founder of Faculty
  • Ankur Anand: CIO of Nash Squared
  • Daniel Acutt: CEO and Founder of Tema App
  • Aden Hopkins: CEO at XpertRule
  • Astrid Bowser: Principal Product Manager at OneAdvanced
  • Paul Taylor: Vice President of Product at Smarsh
  • Kyle Hill: CTO at ANS
  • Helena McAleer: Co-Founder of The Gen AI Academy
  • Michael Owen Hill: Legal Technology Strategist at NetDocuments
  • Tommy Ross: Head of Global Public Policy at Alteryx
  • Berend Booms: Head of EAM Insights at IFS Ultimo
  • Laurel McKenzie: Principal Behavioural Scientist at CoachHub
  • Greg Shewmaker: CEO of r.Potential
  • Adam Stott: Founder of Big Business Entrepreneurs
  • Zoe Cunningham: Director of Softwire
  • Bill Conner: CEO of Jitterbit
  • Michael Green: UK&I Managing Director at Databricks
  • Jess O’Dwyer: General Manager at Pocketalk, Europe
  • Kevin Fitzgerald: UK Managing Director at Employment Hero
  • Anton Roe: CEO of MHR

 

Angie Ma, Co-founder of Faculty

 


 

“As AI becomes further ingrained in our everyday lives, business leaders continue to panic about a perceived lack of AI skills among employees.

In fact, the phrase ‘AI skills’ is something of a misnomer, and executives would do well to avoid blindly talking about it.

Remember – a tiny proportion of employees will build AI and write code. They will continue to need high levels of technical skills, for sure.

But what about the rest of us? The reality is the vast majority of employees will soon be using AI to augment – not replace – our decision-making, and improve our productivity.

So the conversation is misdirected. Leaders instead need to talk more about how humans can manage, monitor, and work alongside AI tools. AI is just like any other technology: we need to learn how to integrate it into our lives.

The difference between a successful and an unsuccessful business will be whether it can operate in a hybrid world where humans and machines work together continuously and harmoniously.”

 

Ankur Anand, CIO of Nash Squared

 


 

“Our recent Digital Leadership Report uncovered the extent of the skills challenge that AI, and GenAI in particular, is creating – we have just seen the biggest and fastest jump in AI skills scarcity of any technology skill, both globally and in the UK. Yet despite this clear challenge, technology and business leaders appear to be slow to respond: over half are not upskilling their staff in GenAI, and almost half are yet to implement meaningful AI training.

Addressing the AI skills question is a challenge, but there are ways in which organisations can set about tackling the issue, both for their technology teams and for staff more broadly across the business.

Embedding AI learning into learning and development programmes is critical to ensure that employees understand the new technologies available and how to get the most out of them. This has to be a key priority for all businesses. There is no point investing in these technologies if we do not upskill those they are intended to support.

What’s often overlooked is how AI literacy can amplify itself across an organisation. When more individuals begin to understand and apply AI in their roles, it creates a ripple effect – sparking curiosity, experimentation, and peer-to-peer learning. This collective momentum not only accelerates adoption but also fosters a culture of innovation and shared intelligence that benefits the entire business.

There are other routes too – for instance, the Big Tech players and key movers in the AI industry are keen to support businesses where possible, because adoption is clearly in their best interests. For example, Microsoft has pledged to equip 1 million people in the UK with the skills needed through its Get On programme.”

 

Daniel Acutt, CEO and Founder of Tema App

 


 

“Most of the general workforce lacks AI expertise. AI has only recently become broadly accessible, helped by exposure on social media. While it has been around for years, it wasn’t widely used or understood. Social media brought it into focus, and as awareness grows, so does everyday usage. Still, this barely scratches the surface of what’s possible. Large organisations have been leveraging powerful AI tools for years – tools the average person isn’t even aware of.”

 

Aden Hopkins, CEO at XpertRule

 


 

“It’s no secret that businesses are doubling down on artificial intelligence (AI) in 2025, with 75% planning to increase their investment in the technology. But much of that attention is being poured into generative AI tools better suited for content creation than business processes.

The risk of ‘polished dishonesty’ in AI is underestimated by many businesses, especially as models become more sophisticated, persuasive and integrated into decision-making – the danger lies in how convincingly wrong these systems can be. Unlike earlier forms of automation, whose failures were usually obvious or mechanical, AI today can present misleading information with confidence and coherence, which makes that misinformation hard to detect.

This misalignment highlights a broader issue: a fundamental lack of understanding around AI’s practical value in enterprise settings. Media narratives around AI often lean towards doom and disruption, fuelling fears among businesses that AI is either uncontrollable or a threat to jobs. To overcome this, companies must focus on transparency and explainability, showing how AI augments rather than replaces human expertise.

Many businesses begin with the wrong question: ‘What can we do with our data?’ Instead, they should ask, ‘What decisions do we need to make and what outcomes do we want?’ This decision-centric approach ensures that AI tools are purpose-built, measurable, and aligned with business goals.

An organisation that has a strong focus on quality might define a clear goal, then develop AI models to assist with decisions such as when to trigger inspections or adjust settings. The result would be more targeted, explainable AI that operators are able to trust.

Additionally, Decision Intelligence (DI) bridges Explainable AI and real-world operations by embedding human expertise into machine-driven processes at two key stages: at design time, human knowledge and expertise are used to automate decisions; at run time, a human is kept ‘in the loop’ for complex calls that require judgment.

This approach not only builds trust within the workplace but also scales expert knowledge across teams and operations, reducing variability and boosting consistency.

The greatest risk lies in high-stakes, regulated businesses and mission-critical tasks. This is where failure is not an option, and where transparency, auditability and explainability are more than requirements – they’re imperative. Without them, businesses risk commercial and reputational damage, not to mention fines.

Finally, for AI to become more than a tech experiment, businesses must prioritise trust. A decision-centric model provides a clear, transparent and scalable path, one where AI becomes a trusted partner, not a tick box or worse, a black box.”
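As a rough illustration of the design-time/run-time split Hopkins describes, the sketch below shows a decision-centric flow in Python: expert judgment is encoded once at design time (here as a hypothetical confidence threshold), routine calls are automated, and uncertain cases are escalated to a person at run time. All names and values are illustrative placeholders, not any vendor’s API.

```python
# Illustrative sketch only: a decision-centric, human-in-the-loop flow.
# QualityReading, model_score, request_human_review and CONFIDENCE_THRESHOLD
# are hypothetical placeholders rather than a real product interface.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.85  # design-time choice that encodes expert judgment


@dataclass
class QualityReading:
    sensor_id: str
    value: float  # e.g. a wear or defect indicator on a 0-100 scale


def model_score(reading: QualityReading) -> float:
    """Stand-in for a trained model scoring how likely an inspection is needed."""
    return min(1.0, max(0.0, reading.value / 100.0))


def request_human_review(reading: QualityReading, score: float) -> str:
    """Run-time escalation: a person makes the final call on borderline cases."""
    print(f"Escalating {reading.sensor_id} (score={score:.2f}) for human review")
    return "pending_review"


def decide(reading: QualityReading) -> str:
    """Automate confident decisions; keep a human in the loop for uncertain ones."""
    score = model_score(reading)
    if score >= CONFIDENCE_THRESHOLD:
        return "trigger_inspection"               # confident enough to automate
    if score <= 1 - CONFIDENCE_THRESHOLD:
        return "no_action"                        # confident enough to automate
    return request_human_review(reading, score)   # uncertain: human judgment


if __name__ == "__main__":
    print(decide(QualityReading("pump-07", 92.0)))  # automated: trigger_inspection
    print(decide(QualityReading("pump-08", 40.0)))  # escalated for review
```

The point of the pattern is simply that the threshold and escalation path are explicit and auditable, which is where the transparency and explainability Hopkins calls for would live.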

 

Astrid Bowser, Principal Product Manager at OneAdvanced

 


 

“Our workforce is our greatest asset. Despite the progress made in enabling organisations to harness this revolutionary technology, there remains a significant AI expertise gap across the general workforce. This is one of the biggest blind spots organisations face today. Beyond focusing on the safe deployment of AI tools, businesses must ensure that employees, regardless of their role or position, have a baseline understanding of how AI works, the risks it introduces, and how to use it responsibly.

Without this, organisations risk the misuse of AI, missed opportunities where AI could truly benefit the organisation, and a lost chance for employees to upskill themselves for the present and future. As AI systems become more integrated into business operations and decision-making, it is critical to have more than just technical specialists guiding the way.”

Paul Taylor, Vice President of Product at Smarsh

 


 

“AI adoption among employees has accelerated rapidly, with workers embracing these tools to boost productivity – Smarsh research shows that over a third (37%) of employees in the financial services sector alone frequently use AI tools in their daily work.

But with innovation comes responsibility. Firms must establish the right guardrails to prevent data leaks and misconduct, which includes AI training and effective monitoring of outputs – 70% of employees in financial services said they would feel more confident if AI outputs were captured and monitored for transparency. Equally, many aren’t clear about how AI is being used, with 29% worried that they don’t know where potentially sensitive information is going when AI agents are used.

Without actively taking these steps, firms are sleepwalking into a compliance nightmare. Businesses are effectively feeding their data crown jewels into a black box which they do not own, where the data can’t be deleted and the logic can’t be explained. Employees are on board with a safe, compliant AI environment that builds trust and unlocks long-term growth, but it’s up to businesses to facilitate an environment focused on transparency and continuous learning.”

 

Kyle Hill, CTO at ANS

 


 

“As AI continues to evolve, there must be an increased focus on the workforce’s readiness to use it effectively. The workforce must move beyond seeing AI as a personal productivity tool and instead view it as something that can drive wider business transformation. The real power of AI comes when it’s embedded across teams, not siloed within IT.

With 27% of UK firms citing a lack of expertise as a barrier to adoption, it’s clear the general workforce is not yet fully prepared or trained. This holds back innovation and risks misuse or underuse of AI tools.

Bridging this skills gap requires investment in workforce training, not just new technology. We’re seeing Government programmes like TechFirst already being introduced to bolster AI training, but businesses must take responsibility for upskilling employees across all functions. Only then can we unlock AI’s full value and ensure it’s used effectively and responsibly.”

 

Helena McAleer, Co-Founder of The Gen AI Academy

 


 

“There’s a skills gap – universities haven’t kept up with technological change – but closing it is achievable. The workforce is experimenting with AI tools, often unofficially, because people recognise the potential to achieve more, faster. The hunger is there; the guidance isn’t.

The challenge isn’t that people don’t want to learn AI – it’s that training focuses on tools rather than thinking. You can teach someone to use ChatGPT in an hour, but teaching them to brief it effectively, evaluate outputs critically, and integrate it strategically takes proper development. Technology changes fast, but the underlying skills have concrete foundations.

Companies need to build capability to surface real value from AI, treating adoption like any skill development: with structure, expert guidance, and investment in people.

Good AI training transforms how people approach problems. We’re seeing teams become more creative, strategic, and confident. The question isn’t whether organisations should invest in AI – but how!”

 

Michael Owen Hill, Legal Technology Strategist at NetDocuments

 


 

“Taking the legal industry as an example, AI skills gaps will naturally occur across the vast majority of law firms. Employees have varying levels of knowledge and understanding of how AI works, what tasks they should be using it for, and the legal implications of using it. They won’t automatically have a deep knowledge of what it is potentially capable of, or of the potential risks it could pose.

In response, broad-brush ‘upskilling’ – requiring every lawyer and support staff member to become a ‘prompt engineer’ – is impractical. Meanwhile, if firms introduce AI solutions that require deep technical knowledge to use, skills gaps could become increasingly problematic. Instead, they need to make it easy for both existing employees and the next generation of lawyers to work with AI responsibly and effectively.”

 

Tommy Ross, Head of Global Public Policy at Alteryx

 


“While Generative AI is producing transformative results for businesses, unlocking productivity and recasting job roles to be more strategic and rewarding, many enterprises still lack effective approaches to developing AI literacy.

High education costs, misaligned incentives, and a lack of proven learning tools are major barriers. The EU AI Act has brought this issue into sharper focus, requiring businesses to ensure staff have sufficient AI expertise. This includes foundational data literacy and soft skills like creativity, critical thinking, and collaboration. Enterprises must adopt layered, tailored training programmes and provide hands-on, accessible tools like low- and no-code platforms.

Prioritising AI literacy is much more than a compliance exercise. It should be a wake-up call. AI literacy democratises working with data and analytics for a wider set of workers. This is an aspect of AI success that can’t be overlooked, and business and IT leaders need to consider what improving AI expertise looks like for their own organisation.”

Berend Booms, Head of EAM Insights at IFS Ultimo

 

 

“The workforce is undertrained for effective AI utilization, creating a critical implementation gap. This is acute in industrial sectors like asset maintenance, where aging workforces and skills shortages already strain operations. Engineers now work alongside AI-driven digital co-workers, yet lack the foundational knowledge and/or cultural understanding to leverage these tools fully.

This disparity means organizations with AI-averse users risk underutilizing AI’s productivity and decision-making potential, with staff struggling to interact with or trust its outputs. In sectors already facing demographic pressures and skill shortages, this growing gap undermines the opportunity for meaningful and inclusive workforce augmentation.

In enterprise asset management (EAM), the solution lies in intuitive platforms that promote AI literacy through ease of use. By embedding AI capabilities directly into everyday workflows, users naturally build familiarity and confidence with AI technology. This approach helps close the skills gap at the point of need and ensures users gain competence organically, accelerating AI’s time to value across the organization.”

 

Laurel McKenzie, Principal Behavioural Scientist at CoachHub

 


 

“Across industries, organisations are increasingly relying on AI for day-to-day operations. This trend is unsurprising, as sectors where employees readily adopt AI experience a staggering 4.8 times greater productivity growth compared to sectors resistant to the technology.

However, simply offering employees access to AI tools is not sufficient to unlock its potential; employees must be adequately trained. This is especially important for HR teams, who will play a key role in evangelising the technology’s adoption and deploying relevant training opportunities. As such, HR professionals must urgently assess their AI readiness.

As companies begin to deploy AI, there are a range of elements to consider, such as the use of sensitive personnel data in training, which could affect user privacy and trust, and the risk of biased outputs from generative AI tools. HR teams must provide guidance and clear communication – through meetings, announcements, accessible AI policies, and Q&A sessions – ensuring employees know that the IT team is controlling AI use and that privacy is paramount. HR leaders could work with coaches to improve communication and prepare for tough questions, and this is also a good time to train employees on AI tools. By prioritising their own AI literacy through training, coaching, and ethical practices, HR professionals can guide employees through the transition and champion responsible AI integration.”

Greg Shewmaker, CEO of r.Potential

 


 

“There’s a growing belief in some executive circles that AI can simply replace people. The path forward isn’t about choosing between going all-in on AI or avoiding it entirely; it’s about finding the right balance. When we use AI to support and elevate the work our people do best, we start seeing real, lasting impact.

That means we need to figure out how to use AI to support our people, align with our goals, and reflect on how our business operates. Above all, it requires thoughtful, evidence-based planning instead of assumptions or hype.

We need to be able to simulate, analyse, and optimise how AI and human labour fit together – before we make real-world decisions. Tools that support this kind of simulation can help us determine how to make the most of our human talent, using digital twins, real-time labour data, and predictive modelling to identify where AI can support rather than replace people.”

 

Adam Stott, Founder of Big Business Entrepreneurs

 


 

“AI is moving fast, and most businesses are being left behind because their teams simply don’t know how to use it properly. Tools like ChatGPT and Jasper are game-changers, but only if you know how to use them. Give someone a Formula 1 car when they’ve only ever ridden a bike… they’re not going far.

At Big Business Entrepreneurs, we’ve invested heavily in AI training and developed our own custom ‘AI Adam’ model for staff and clients to use. This isn’t just about saving time; it’s about producing better marketing, stronger copy, faster solutions, and more creative ideas.

The businesses that embrace AI and actually teach their teams how to use it will gain a serious edge. Ignore it, and you’ll fall behind. Lead from the front, learn it yourself, and watch the results speak for themselves. Those who move fastest will win.”

Zoe Cunningham, Director of Softwire

 


 

“There are two main risks with a lack of AI knowledge amongst the general workforce. The first is that many people are missing small (or even large) improvements that could make their work lives more efficient and more convenient. However, this is true of many previous technological developments – in general, when people are not directly involved with technology, they can be resistant to changing the ways they have always worked.

Often, new and improved ways of working only really take hold as the next generation enters the workforce and brings new ideas and an openness to learning. A much bigger risk, especially with AI, is that untrained employees use it unsafely, inefficiently or in ways that lead to inaccurate or incorrect output. Key steps to address these two issues include sharing enthusiasm about new tech and case studies of where it has worked, and encouraging peer groups so that employees can learn from each other.”

 

Bill Conner, CEO of Jitterbit

 


 

“A generational difference exists in AI adoption. While some younger workers, or ‘digital natives’, readily embrace AI tools, other more experienced employees may require additional support to feel comfortable using them. Companies can bridge this gap by focusing on business-user processes and use cases, which will help empower employees with appropriate AI tools and experiences.

Businesses often get caught up in the pursuit of AI, seeing it as a ‘silver bullet’ that promises maximum gains in time, money, and productivity. However, prioritisation is key. Legacy data stored in siloed applications and integration issues can significantly hinder progress. Instead of going ‘all in’ on flashy AI solutions, companies should prioritise integration of existing systems and utilise AI strategically where it adds the most value. This approach mitigates risks and ensures a smoother transition to a more efficient work environment.

The headlines tend to focus on big acquisitions and bold AI promises. But real impact depends on something far less glamorous: how well organizations can layer intelligence into their existing systems, applications, workflows, and data. That’s where true AI infusion happens; not at the point of purchase, but at the point of integration. It’s a space where experience matters, and where platforms built for layered, scalable intelligence can quietly outperform the big names pursuing their next big idea.

There’s a lot of talk in the industry about strategic alignment and unlocking AI value, which sounds great on paper. Of course, the real challenge isn’t in acquiring data infrastructure, it’s in actually using it. Integration, execution, and measurable impact have historically been the sticking points. This time, the focus should be on delivery for customers, not just narrative.”

Michael Green, UK&I Managing Director at Databricks

 


 

“AI tools are only as effective as the people trained to use them. A lack of AI literacy within organisations remains one of the biggest barriers to successful deployment. To ensure a smooth transition, businesses should take a structured approach to AI training, aligning upskilling with business goals. This means taking ownership of internal AI education and integrating continuous learning programmes to ensure employees feel equipped to engage with new processes.

Recruiting specialist AI talent is another major challenge. Without this internal expertise, businesses often rely on generic third-party solutions that may not align with their unique operational needs. To address this, organisations should prioritise recruiting AI specialists with both technical and industry-specific knowledge, while also upskilling existing employees to create a workforce capable of working alongside AI systems.

Investing in home-grown AI applications can also provide long-term advantages. When developed in-house, preferably within a unified data platform, AI tools and agents can be customised to meet specific business challenges and build institutional AI knowledge. Businesses that develop in-house AI expertise will be better positioned to adapt AI to their unique needs rather than relying on off-the-shelf solutions that may not fully align with their operational goals.”

 

Jess O’Dwyer, General Manager at Pocketalk, Europe

 


 

“We feel that AI still suffers from a lack of awareness around its safety and impact. We need to raise awareness and educate people that AI can be just one part of the process and of progress. Attitudes need to evolve to harness AI’s ability to deliver greater, more efficient outcomes, whilst remaining safe and secure. Businesses need to ensure that their products and services respect the regulations while offering new, innovative and reliable solutions that enhance and work well with other providers’ solutions.”

 

Kevin Fitzgerald, UK Managing Director at Employment Hero

 


 

“We recently conducted research into workplace AI usage and found that there is an ‘AI advantage gap’. While nearly three-quarters (73%) of senior managers are using AI monthly, this number drops to just 32% for entry-level employees. Despite popular claims that Gen Z are leading the AI charge, it’s millennials who are actually the heaviest users.

This workplace divide is creating an uneven playing field, with AI tools largely concentrated at the top. Leaders are setting the pace, but many workers lack guidance or access to tools relevant to their roles. We know that employees who feel excluded from AI initiatives report a 50% decline in their productivity, which is why we are calling for greater support and better training to tackle the growing imbalance.

This means investing not just in access to tools, but in a more human-centred approach to AI adoption, ensuring training, support and access to tools are available at every level, not just at the top.”

Anton Roe, CEO of MHR

 


 

“While investment in AI tools has soared, the implementation stage often doesn’t consider the people who are meant to use the technology. This leads to low engagement, scattered use, and wasted potential. AI can’t just be handed down from leadership; it needs to be shaped by those closest to the work. That means investing in practical training, supporting experimentation, and building confidence through real use cases. Crucially, organisations need to treat AI less like a standalone initiative and more like a shift in how work gets done. Without that mindset, we risk embedding tools that aren’t fit for purpose, or worse, aren’t used at all. As the technology advances, the real differentiator will be how quickly teams, not just leaders, are able to make it part of their everyday work.”