Artificial intelligence is often presented as a self-learning marvel – a technology that can write, reason and adapt almost independently. But behind the slick demos and polished product launches lies an unavoidable truth: AI still depends on armies of human workers.
That, in itself, isn’t a problem – in some ways, it’s even reassuring. But in a cruel twist, many of those workers are training systems that could one day render their own labour obsolete.
That paradox was thrown into sharp relief this month when hundreds of contractors employed by GlobalLogic, a staffing firm hired by Google, were laid off. Their primary responsibility was helping refine Gemini, Google’s flagship AI model, and its AI Overviews, which now appear at the top of search results.
These contractors were responsible for reviewing and tweaking responses to ensure the AI sounded natural and “human-like”. In effect, they were adding a human edge to what the AI generated, reinforcing a common argument: AI still can’t operate completely independently of human help. But in doing so, they were also teaching these models how to sound more human – becoming the human scaffolding propping up a technology designed to do their job better than they can.
The Hidden Workforce Behind AI
The GlobalLogic episode isn’t unique. Big tech companies from Silicon Valley to Shenzhen rely on a sprawling network of human trainers, moderators and testers who label data, evaluate AI outputs and simulate conversations. These tasks are essential for the development of AI: without human feedback, even the most advanced AI tends to produce robotic or nonsensical responses.
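To make that work concrete, here is a minimal, hypothetical sketch (in Python) of the kind of preference record a human rater might produce when comparing two model responses. The field names and example are illustrative assumptions, not Google’s or GlobalLogic’s actual tooling.

```python
from dataclasses import dataclass

@dataclass
class PreferenceRecord:
    """One unit of human feedback: a rater compares two candidate
    responses to the same prompt and records which reads better."""
    prompt: str      # the user query shown to the model
    response_a: str  # first candidate answer
    response_b: str  # second candidate answer
    preferred: str   # "a" or "b", as judged by the human rater
    rationale: str   # free-text note explaining the judgement

# The kind of judgement a contractor might log many times a day
# while refining a model's tone (entirely made-up example).
record = PreferenceRecord(
    prompt="Explain photosynthesis to a ten-year-old.",
    response_a="Photosynthesis is the process by which autotrophic organisms convert light energy...",
    response_b="Plants make their own food using sunlight, water and air.",
    preferred="b",
    rationale="B is simpler and sounds more human.",
)
print(record.preferred)  # -> b
```

Millions of such judgements, aggregated, are what steer a model toward responses that read as natural rather than robotic.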
However, these workers are rarely in the spotlight. They are usually contractors rather than employees, which means limited job security and, sadly, often worse treatment than higher-level, permanent staff receive. The GlobalLogic layoffs highlight just how precarious this kind of work can be: once the model is “good enough”, the people who trained it are no longer needed in the same volume.
For businesses in the UK watching this unfold, there’s a pretty clear lesson: the AI boom is built on human labour, much of it invisible. Startups developing their own AI tools may be tempted to outsource training to low-cost contractors, but that raises questions of ethics, quality and sustainability. Who bears the cost of making machines intelligent, and who benefits once those machines reduce demand for human effort?
A Warning for the Future of Work
But there’s a deeper, more philosophical dilemma here. If workers are training systems that will eventually take over their roles, are they complicit in accelerating their own redundancy? Or are they simply caught in an unavoidable economic cycle, one that most likely ends in their professional demise?
We’ve seen similar dynamics before. Factory workers trained early industrial robots, only to be replaced by them years later. Call centre staff have long been asked to help refine automated voice assistants, even as those assistants shrink headcount. Now, with AI models like Gemini, ChatGPT and Anthropic’s Claude (among so many others), the scope is far wider. We’re not just talking about replacing manual tasks, but intellectual and creative ones too, and perhaps that’s the scariest prospect.
The Catch-22 is stark – without humans, AI cannot improve, but as AI improves, the humans who helped it grow may lose their livelihoods. For British tech employees, especially those in roles involving customer support, content moderation or data analysis, the message is sobering: AI won’t replace every job, but it will almost certainly reshape them.
What Does This Mean for UK Businesses and Startups?
For UK businesses, especially startups in the AI or edtech sectors, the GlobalLogic story carries two key lessons.
First, AI isn’t autonomous. It needs human context, oversight and correction, and any company building AI products should be honest about that dependency and transparent about who provides the work. Glossing over the human input risks reputational damage as customers and regulators demand more ethical AI supply chains.
Second, businesses urgently need to prepare for the social impact of the tools they deploy. If AI tutors can mark assignments, what does that mean for teaching assistants? If AI agents can handle customer queries, how do companies retrain and redeploy staff rather than simply cutting them loose?
Companies that plan for this transition responsibly – investing in reskilling, redeployment or creating new kinds of human-in-the-loop jobs – will undoubtedly be better placed to earn trust and build long-term resilience.
Building Without Burning Bridges
The GlobalLogic contractors who helped Gemini sound more human may not have expected their work to end so abruptly. But their story is a reminder that AI, for all its futuristic promise, is still a deeply human technology – trained by people, refined by people and often built at the expense of people. That holds whether those people are highly skilled software engineers or contract workers doing the grunt work.
For the UK’s tech community, the challenge is clear. AI isn’t going away, so what matters is what we do with it: how we integrate it will shape the workforce of tomorrow. Will we treat human trainers as disposable, or as essential partners in building trustworthy systems? Will we allow AI to hollow out jobs, or find new ways for technology to augment rather than replace?
The answer will determine whether AI becomes a tool for shared progress or yet another wave of disruption that leaves too many people behind.