What was once elusive is becoming somewhat inescapable.
AI used to be an incredible concept, the very thought of which was challenging to grasp but inexplicably exciting to consider. It created opportunity where there was none and opened doors to a future few ever dreamed they would live to see. AI was simultaneously exciting and scary, and for all intents and purposes, it still is.
What’s changed, however, is the fact that it’s no longer a figment of our excited imaginations and anxious longing; a mere component of complex, advanced technology far out of the reach of ordinary people. Now, it’s everywhere – whether we like it or not. An unavoidable feature of every app, an addendum to every document, a division at every tech company and a hot topic in every conversation.
At least, that’s been my experience, and I hardly think I’m alone in feeling this way.
So, what’s the problem? I’m not against AI, nor am I participating in the AI fearmongering that crowds online chatrooms and floods Baby Boomers’ WhatsApp chats. I use it all the time and I encourage others to do the same.
But, AI does make me nervous. Just not for the reasons you may think.
AI Doomsday Anxiety: Legitimate Fear or Tired Trope?
The whole “AI apocalypse” trope is very dystopian in nature, latching onto tired old technology-related panic from years past combined with sci-fi-esque “end of the world” imagery. And sure, these things certainly do make one stop and think – they make us wonder what we’re creating and how much control we’ll have over it.
Of course, the primary fear on the minds of AI laymen (and by that, I mean non-AI-experts) is Artificial General Intelligence (AGI) and the development of consciousness. This concern is promptly followed by visions of angry robots turning on humans and the end of the world as we know it – sure, we’ve all seen the movies.
However, I believe this fear is misplaced. I’m not saying we should totally neglect these concerns – please don’t call me when the robots turn on us, I’ve been saying “please” and “thank you” to ChatGPT. Rather, I think the dystopian future we all picture so vividly is not our most immediate concern (if one at all). I’m not saying don’t worry – I’m saying we need to worry about something else.
What If the Real Risk Is The One That Looks Harmless?
Sometimes, the real threat, or the most significant threat, is the one we don’t see coming. And yes, I’m well aware that I sound like Obi-Wan offering an earnest piece of advice in a deleted scene from “Star Wars: The Phantom Menace”.
But I stand by the sentiment. While the world chatters about AGI, GPUs and compute, rushing to create and implement national and even global AI regulation in record time – all the big, headline issues – we’re letting the seemingly smaller issues slip through the cracks. And the problem is that while they may look small next to things like data sovereignty and mass redundancies, these “small” issues could have serious implications.
In fact, they already do. The effects may not be bold and obvious just yet, and we may not be seeing them everywhere we go, but the consequences are beginning to crop up increasingly often.
An “Assumption Failure, Playing Out at Civilisational Scale”
Much like the problem may seem “small”, it’s also quite simple. It’s about implicit bias and disproportionate representation, and the effects these things can have (and already are having) on society. Of course, this argument could and should be made for demographics and representation in all contexts, but for now, I’m focusing on gender – because:
- There is a decent amount of research that’s already been done on the topic.
- It offers a case study, an example, to make a broader argument.
So, I’ll put it quite simply. The issue at hand, the one I’ve alluded to and frustratingly danced around for the better part of 500 words, is this: women are disproportionately underrepresented in artificial intelligence compared to men.
The phrase “involved in AI” is, admittedly, awkward and vague, but bear with me – there’s good reason. When I say “involved in”, I’m talking about many things, but most of all: women actively working on developing actual AI technology (the complicated stuff), women working “in” AI (the AI industry, shall we say) and women actually using AI tools themselves.
According to a report published by Forbes in April 2025, women in the study both adopted and actively used AI tools far less than men. The Financial Times reported in a June 2025 article that a Danish study of 100,000 workers found women are 20 percentage points less likely than men to make use of common AI tools like ChatGPT. And LeanIn revealed that, according to another study, women are not only less likely to use AI both at work and at home, but they also feel more anxiety around using AI tools, they’re less likely to be encouraged to use AI and, overall, they feel significantly less positive about AI in general.
There have been countless studies conducted across time, regions and age groups, and they all seem to tell us the same thing. Men use AI more than women.
And this is only AI use. As you may have already guessed, the industry itself reflects the same pattern. There are significantly more men than women hired to fill AI-related professional roles.
So what does this mean?
Well, first and foremost, women are obviously underrepresented, and there are plenty of reasons for this – the types of jobs women do compared to men (i.e. the continuation of traditional gender norms), general gender perceptions, access and so much more. Honestly, this is a massive debate in itself, and I’m going to bypass it not because it’s unimportant, but because I think it often becomes a roadblock that prevents us from reaching the next issue. And the next issue, the one I’m focused on, is what this underrepresentation of women actually means for us – “us” being society as we know it.
Because when I started considering this issue and looking into it, I became alarmed quite quickly. And I’m sorry to say it, but further research and discussion with experts has only escalated that concern. Inequality and gender issues are already massive problems we face today, and it seems as though we may, unknowingly, be exacerbating them to epic proportions.
I don’t want to be dramatic, but Sayali Patil, an AI Systems Architect at Cisco, has described the situation as an “assumption failure” that is currently “playing out at civilisational scale”. And if that, coming from a deeply technical AI expert, doesn’t concern you, it really should.
The Implications of Female Underrepresentation In AI
The problem we’re facing is that if more men than women are building AI systems, shaping datasets and using these tools daily, then we have to ask a simple question – can AI really serve everyone equally?
After all, AI learns patterns. It learns what “good” looks like, what “strong” sounds like and what “confidence” reads like. But those patterns don’t just appear out of nowhere – they come from data, and that data is shaped by human behaviour. It’s data that we feed the models, data that comes from humans and is selected by humans, and if that behaviour is disproportionately male, then the definition of “normal” risks becoming male too.
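To make that concrete, here’s a minimal sketch – in Python, with invented data and a deliberately crude stand-in for a “model” – of how a frequency-based notion of what’s typical drifts toward whichever group dominates the training set. None of the words or numbers below come from any real dataset:

```python
# A minimal sketch (invented data, standard library only) of how a
# frequency-based notion of "typical" drifts toward whichever group
# dominates the training data. Not a real model or dataset.
from collections import Counter

# Hypothetical corpus: 80% of examples use an assertive register,
# 20% a collaborative one -- mirroring a usage gap, not measuring one.
training = ["led"] * 40 + ["drove"] * 40 + ["supported"] * 10 + ["helped"] * 10

freq = Counter(training)
total = sum(freq.values())

def typicality(word: str) -> float:
    """Share of the training data using this word: a crude stand-in
    for how 'normal' a pattern looks to a trained model."""
    return freq[word] / total

for word in ("led", "supported"):
    print(f"{word!r}: typicality {typicality(word):.0%}")
# 'led' scores 40%, 'supported' scores 10% -- the minority register reads
# as less typical purely because it appears less often in the data.
```

Nothing in that toy decided to prefer one register; the imbalance alone did the work.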
I’m not saying that AI will intentionally favour men – at least not in general terms – and that’s actually what makes the problem more concerning. The bias is rarely deliberate; it’s subtle, it’s structural and often, it’s invisible.
Syed Asif Ali shared a striking example with me from his time testing an AI hiring tool in Dubai. He noticed the system repeatedly downgraded some female candidates, despite strong skills and relevant experience. On closer analysis, the only noticeable difference was tone: those candidates wrote in a less aggressive, less self-promotional style.
“When we looked closer, it made sense,” he explained. “The model had been trained on data where that more direct style was treated as a signal of confidence. So anything outside that just… looked weaker to it.”
Technically, nothing was broken. The system wasn’t programmed to discriminate, yet the outcome still skewed in one direction. As Ali put it, “bias in AI isn’t always loud or obvious. Sometimes it’s just one style quietly becoming the default, and everything else getting pushed down without anyone noticing.”
And I think this example highlights the risk I’m raising better than I could explain it, because it’s real. AI doesn’t just reflect data; it standardises it. Once a particular communication style, career path, behaviour pattern or language becomes associated with success (hiring success, in this example), the system begins reinforcing it. And over time, that pattern scales.
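To show the shape of that failure mode, here’s a hypothetical sketch of a CV scorer. The phrases and weights are invented for illustration – this is not the tool Ali tested, just a toy model whose learned weights happen to treat self-promotional phrasing as a confidence signal:

```python
# A hypothetical toy scorer illustrating the failure mode Ali describes.
# All phrases and weights are invented for illustration.
LEARNED_WEIGHTS = {
    "i led": 2.0,            # assertive phrasing carries heavy weight
    "i delivered": 1.8,
    "we collaborated": 0.2,  # collaborative phrasing barely registers
    "i contributed": 0.3,
    "python": 1.5,           # genuine skill signals, equal for everyone
    "ten years": 1.5,
}

def score(cv_text: str) -> float:
    """Sum the weights of every learned phrase found in the CV."""
    text = cv_text.lower()
    return sum(w for phrase, w in LEARNED_WEIGHTS.items() if phrase in text)

candidate_a = "I led the migration and I delivered it early. Python, ten years."
candidate_b = "We collaborated on the migration; I contributed the Python design. Ten years."

print(score(candidate_a))  # 6.8 -- same skills, assertive tone
print(score(candidate_b))  # 3.5 -- same skills, collaborative tone, lower score
```

Both candidates carry identical skill signals; the entire gap comes from tone, which is exactly the kind of skew that looks like a working system until someone asks why.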
This is where representation becomes critical. Ali warns that “if the people building these systems all come from similar backgrounds, the definition of ‘normal’ gets very narrow. And once that’s baked into the system, it scales fast.”
The challenge is that we may not even know what’s causing the bias. It might not be anything obvious, like explicit gender labels. It could be tone, phrasing, career gaps, communication style or behavioural patterns that correlate with one group more than another in ways we haven’t necessarily noticed yet. These signals are subtle, and once embedded in large datasets, they become extremely difficult to isolate.
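Here’s a minimal sketch of that proxy problem, using a single invented feature. Even with the gender field removed entirely, a correlated attribute lets the hidden label be reconstructed well above chance – all numbers are made up for illustration:

```python
# A minimal sketch of proxy leakage, using one invented feature.
# All correlations and numbers here are made up for illustration.
import random

random.seed(0)

def make_record(group: str) -> dict:
    # Hypothetical correlation: group B shows career gaps far more often.
    # A downstream model never sees 'group' -- only 'career_gap'.
    has_gap = random.random() < (0.6 if group == "B" else 0.1)
    return {"group": group, "career_gap": has_gap}

records = [make_record("A") for _ in range(500)] + [make_record("B") for _ in range(500)]

# The usual first fix: drop the sensitive column entirely.
anonymised = [{"career_gap": r["career_gap"]} for r in records]

# But the proxy alone still predicts the hidden attribute:
guesses = ["B" if r["career_gap"] else "A" for r in anonymised]
accuracy = sum(g == r["group"] for g, r in zip(guesses, records)) / len(records)
print(f"Hidden group recovered from the proxy alone: {accuracy:.0%}")  # ~75%
```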
Indeed, once we create this problem, it becomes harder to fix. If we can’t clearly identify the bias, we can’t easily remove it, and if AI systems are deployed at scale before we address it, those patterns risk becoming normalised across hiring, performance reviews, education, finance and beyond.
The Gender Gap In AI Could Shape Society
This is why the gender gap in AI matters – it’s not just about fairness in the workforce (though that matters enormously), it’s about shaping the technology that will increasingly shape society as a whole. If women are underrepresented in building and using AI, then their experiences, behaviours and communication styles may be underrepresented in the data that trains it.
The result isn’t necessarily overt discrimination; it’s most likely going to be something quieter and, precisely because it’s unintentional, potentially more sinister. It’ll be systems that subtly favour one way of working, tools that reward one tone over another or models that learn from patterns that don’t fully reflect everyone.
And because AI operates at scale, those small biases don’t stay small. They grow, and they grow quickly.
This is the uncomfortable reality in which we find ourselves. The problem isn’t that AI will intentionally exclude women – rather, it’s that it may unintentionally optimise around male-dominated patterns simply because those patterns appear more often in the data. And once that happens, the technology we rely on every day could quietly reinforce the very inequalities we hoped it would help solve. The inequalities we’ve been working to mitigate and challenge for centuries.
If AI is going to shape the future, then representation in building it matters – not just for fairness, but for accuracy. Because when half the population is underrepresented, the system isn’t just biased, it’s incomplete.
Our Experts:
I spoke to a group of experts on the topic:
- Sarah Hoffman: Director of AI Thought Leadership at AlphaSense
- Jenny Briant: Director of Talent Strategy at Ten10
- Maria Nugroho: AI Enterprise Strategist
- Ana-Maria Badulescu: VP of AI Labs at Precisely
- Charlotte Wilson: Head of Enterprise Business at Check Point Software
- Sayali Patil: AI Systems Architect at Cisco
- Syed Asif Ali: Founder and Digital Identity Architect at Point Media
- Michael Ferrara: Technology Contributor and Legal IT Practitioner at Conceptual Technology
- Edward Tian: CEO of GPTZero
- Emma Irwin: Director of Sales Engineering at Dataiku
- Faye Ellis: Principal Training Architect at Pluralsight
Here’s what they had to say.
Sarah Hoffman, Director of AI Thought Leadership at AlphaSense
“AI has the potential to move us forward in extraordinary ways. But even as systems become more autonomous, they are still shaped by human decisions. If those perspectives are too narrow, the outcomes will be too. We’ve already seen how bias can surface in areas like hiring and healthcare.
“AI is quickly becoming foundational infrastructure for how work is done and how decisions are made. If AI is going to positively impact the future of work for women, then women across cultures and communities need to help shape it. Without women’s contributions, we risk carrying yesterday’s assumptions into tomorrow’s digital infrastructure.”
Jenny Briant, Director of Talent Strategy at Ten10
“AI is not being introduced into a neutral environment, and that’s where the risk lies. Recent data from the International Labour Organisation shows that female-dominated roles are almost twice as likely to be exposed to generative AI as male-dominated ones, which means the people most affected are not always the ones influencing how these tools are designed, tested or governed.
“These systems are built on past decisions and behaviours. If those reflect uneven access to opportunities or progression, AI can end up reinforcing those same patterns in how people are assessed, hired or supported at work.
“The issue is not the technology itself, but who is shaping it. If women are underrepresented in the teams designing, testing and governing AI, their perspectives are missing from critical decisions.
“If we want AI to close gaps rather than widen them, we need diverse representation, strong governance and consistent human oversight built in from the start.”
Maria Nugroho, AI Enterprise Strategist
“This isn’t just a diversity issue; it’s a technology and commercial quality issue, and we need to say that more loudly.
“Only 22% of the global AI workforce is female (WEF, 2025). Just 14% of AI research papers have a female first author (Stanford AI Index, 2025). These aren’t abstract statistics; they represent who decides which problems AI solves, which datasets get used, and whose experiences get encoded into systems shaping enterprise decisions worldwide.
“The commercial consequence is measurable: AI products built by gender-diverse teams show 15% fewer bias-related errors (McKinsey, 2024). Homogeneous teams produce homogeneous outputs. When AI misrepresents half the population, organisations deploying it inherit that blind spot and pay for it in failed adoption and missed value.
“We are not just building technology. We are building the infrastructure of future economies. If women aren’t in the room, we aren’t building for the full market and no amount of post-deployment patching fixes a foundation built without us.”
Ana-Maria Badulescu, VP of AI Labs at Precisely
“The future of AI and data innovation is dependent on the diversity of the perspectives shaping it. Now that AI is increasingly embedded across every sector, we have a responsibility to make AI truly reflective of the society it serves. We must act now if we want AI to be a force for equity and innovation, rather than exclusion, and that begins with diversity amongst those who work on the technology itself.
“When we create space for more women — and for people of all backgrounds and experiences — we build technology that is stronger, more creative, more equitable, and ultimately more impactful.”
Charlotte Wilson, Head of Enterprise Business at Check Point Software
“This is not a new problem, and it is a very real one. We know that diversity drives growth and profit, yet 95-97% of developers in the AI space are men. That means unconscious bias is being baked into the technology from the ground up. It’s not intentional, but it is happening. We only need to look at the data. The lack of women, and particularly senior women, in tech costs the UK economy somewhere between £2.5 and £3 billion per year. We still have an 88% gender pay gap in the UK, meaning women are earning less, and yet when it comes to accessing AI education, upskilling courses and tools, the cost is exactly the same. There is no dispensation, no adjustment made to try to rebalance that dynamic. That sends a message.
“The danger here is significant. As reports like those from the LSE and others have shown, when AI is deployed with unintentional bias embedded in its outputs, it doesn’t just disadvantage women, it actively harms society. And with the current political backlash against DEI initiatives, and some of the more populist movements pushing hard away from a diversity agenda, this is probably going to get worse before it gets better. The cost of inaction won’t just be felt in boardrooms. It will also show up in taxation, in public services, and in the broader burden on society. We cannot afford to be so dazzled by what AI can do that we ignore who it’s being built for.”
Sayali Patil, AI Systems Architect at Cisco
“I have spent several years building the kind of enterprise AI infrastructure that quietly shapes how millions of people experience technology every day. And what I can tell you from that vantage point is this: the gender gap in AI is not a cultural problem waiting for a cultural solution. It is an engineering problem already embedded in production systems, and it is getting harder to fix with every model that ships.
“Here is what that looks like in practice.
“Intent classification, the core mechanism that determines how an AI system understands what a person is asking, is shaped by the data it is trained on and the assumptions of the people who designed it. When those people are overwhelmingly male, the system learns to recognize and prioritize patterns of communication that reflect male experience. Not because anyone decided to discriminate. Because nobody in the room knew what was missing.
“That gap does not show up in a product demo. It shows up six months after deployment, in support tickets, in satisfaction scores, in the quiet frustration of users who feel like the system never quite understands them. By that point the model is already in production. The assumptions are already downstream. Rolling them back is not a checkbox exercise. It is an architectural undertaking.
“I hold a USPTO granted patent in intent based chaos engineering (US12242370B2), which is fundamentally about how AI systems behave when their assumptions about the world do not match reality. The gender gap in AI development is exactly that kind of assumption failure, playing out at civilisational scale.
“The models being trained today are not products. They are infrastructure. Infrastructure lasts decades. And we are building it right now, in rooms where one perspective is dramatically overrepresented, at the exact moment when course correction is still possible.
“That is the story worth telling.”
Syed Asif Ali, Founder and Digital Identity Architect at Point Media
“I ran into this last year while testing an AI hiring tool in Dubai. It kept downgrading some female candidates and I couldn’t figure out why. Skills were solid. Experience was fine. The only difference was how they wrote — less aggressive, less “I did this, I led that” kind of tone.
“When we looked closer, it made sense. The model had been trained on data where that more direct style was treated as a signal of confidence. So anything outside that just… looked weaker to it.
“Nothing was technically wrong. But the output still felt off.
“That’s the part people miss. Bias in AI isn’t always loud or obvious. Sometimes it’s just one style quietly becoming the default, and everything else getting pushed down without anyone noticing.
“If the people building these systems all come from similar backgrounds, the definition of “normal” gets very narrow. And once that’s baked into the system, it scales fast.”
Michael Ferrara, Technology Contributor and Legal IT Practitioner at Conceptual Technology
“AI is becoming part of everyday life, so it matters who helps create it. Right now, many AI teams are still led mostly by men. That can create problems, even when no one means harm. People usually build products based on what they know, and they may miss challenges others face.
“We have seen examples of this before. Some AI image tools once showed mostly men when asked for pictures of business leaders or entrepreneurs. Now some tools seem to push harder for balance by showing more women in top jobs. That is interesting, but changing pictures alone does not solve the bigger issue.
“Real progress happens when we see more women involved in building the technology itself. That includes writing code, testing products, leading teams, and making final decisions. Different backgrounds bring different ideas and help spot mistakes sooner.
“If AI can help shape jobs, schools, healthcare, and business, then it should be built with many voices at the table, not only a few.”
Edward Tian, CEO of GPTZero
“From my experience developing GPTZero, I have learned that AI bias appears in many inconspicuous forms. It includes what the model considers “normal,” the types of items flagged for anomaly detection, and those that are omitted from consideration. If mostly male data sources are used to create training data or products, then it is likely that women will be impacted by AI in a different way than men. This is particularly frequent in an employment context (e.g., hiring, evaluating performance, and moderation).
“From our detection research, we also see examples where models operate differently based on writing style, tone, and method of communicating. This is important because language is shaped by societal and gender norms, so if the design and testing personnel represent less than complete demographic diversity, then they will probably not see the failure modes until they have caused widespread damage as more people use them.
“The answer to this problem is not only “more women in AI,” but also the development of stronger evaluation and testing practices that account for disparate impact based upon demographic groupings and real-world contexts.
“The key takeaway is that AI will reflect its creators unless there is pressure to make them reflect all groups of people it serves.”
Emma Irwin, Director of Sales Engineering at Dataiku
“To guarantee AI success at scale, in a way that is trusted and mitigates bias, we must first combat the issue of female representation in AI. If AI models are shaped by the viewpoints of the engineers that build them, how can we avoid bias when an AI engineering team is made up entirely of men? And if that is the case, what can we do to ensure the decisions they shape are representative of the female half of the population?
“The data that AI models are trained on must be representative of a range of diverse demographics to avoid in-built bias – including a balance of female contributions. Having more varied perspectives shaping AI will improve the outputs from both an equality standpoint as well as the quality of AI outputs overall. Embedding inclusion in the AI development lifecycle will also reduce blind spots and help create AI that is more credible and widely accepted.
“Ultimately, not only will having more female voices present drive more inclusive and impactful innovation, but it will also improve the quality of AI models overall, while strengthening trust, credibility and creativity.”
Faye Ellis, Principal Training Architect at Pluralsight
“Historically, women are overrepresented in roles like coordination, documentation and support – exactly the tasks which AI is most likely to automate or devalue. Unless roles are intentionally redesigned, women face higher risks of redundancies than men do.
“This reflects the fact that AI – largely developed by men – cannot effectively serve the needs of both men and women in the workplace. Roles must be redesigned so that women can move into work involving judgement, oversight and decision-making.
“Women must also be present for decisions across the entire AI lifecycle, including product, architecture, governance and procurement.
“Women are less likely than men to have time to learn outside of work hours, and learning how to use AI is no exception. Organisations must build dedicated time into the working day for learning how to use AI models at work, otherwise women will fall behind men in their knowledge. AI capacity must be built into roles, not bolted on, otherwise the gender gap will inevitably widen.”
Post-publishing note: Due to a significant volume of feedback received on this topic, a part two will be published in the coming days.