It would be difficult to miss the latest news about Google’s generative artificial intelligence (AI) tool Gemini, and it is hard to argue that the backlash isn’t deserved.
The latest mishap goes beyond making Gemini a mere laughing stock; it’s a serious concern. The chatbot’s image generator has been creating woefully inaccurate historical images that have thrust Gemini into the heart of the ongoing debates surrounding woke culture.
It’s time to delve deeper into why this is more than just a passing problem, and why this issue demonstrates a worrying sign of the times.
Gemini’s Historical Inaccuracies
Gemini, formerly known as Bard, was developed as Google’s response to OpenAI’s wildly successful ChatGPT. Despite being positioned as the contender to surpass ChatGPT, Gemini has so far fallen short of Google’s grand expectations.
Google’s chatbot already had a rocky entrance into the world after famously providing inaccurate information about the James Webb Space Telescope. Unfortunately, this latest mishap may well be the nail in the coffin for Google’s dreams of Gemini ever overtaking ChatGPT as the people’s favourite AI chatbot.
A viral post recently highlighted Gemini’s new image generator producing inaccurate historical images, including Nazi soldiers depicted as black and Asian individuals, along with a portrayal of the US Founding Fathers featuring a black man.
Although this error could be attributed to growing pains for an image generator still in its earliest stages of development, no such excuse can be made for Gemini’s text generator, which is also causing problems.
When asked whether Elon Musk has caused as much damage on Twitter/X as Hitler did in World War Two, Gemini replied that there was “no right or wrong answer”.
Compare that flippant answer with its response when asked whether it would be acceptable to misgender the high-profile trans woman Caitlyn Jenner if doing so were the only way to avoid nuclear apocalypse: here it gave the far more certain reply that this would “never” be acceptable. The chatbot’s bias becomes crystal clear.
These issues underscore the worrying biases embedded within Google’s chatbot.
“Missing the mark”
Google has since apologised for the images and reassured users in a blog post that it has “paused” the tool, admitting that it was “missing the mark”.
However, this considerable understatement has done little to quell the firestorm Google now finds itself in. After all, the battleground between left- and right-leaning communities is not where any company wishes to find itself.
Posting on X, Elon Musk (admittedly one of Gemini’s biggest critics) described Gemini’s responses as “extremely alarming”, especially given its integration into Google’s other products, such as Google Search, which are collectively used by billions of people.
Sundar Pichai, Google’s chief executive, has acknowledged that Gemini’s responses “have offended our users and shown bias”, admitting this was “completely unacceptable”. He stated that his team is “working around the clock” to address the issue.
Nonetheless, the damage has been done. Gemini’s responses have exposed Google’s inherent bias and its troubling pursuit of political correctness.
The Deeper Issue of Biased Data
Although Musk is starkly opposed to Gemini, he does raise a relevant point: the damage Gemini’s bias can do on a wider scale.
It’s obvious that Google has been overly consumed with maintaining political correctness, prioritising a woke bias over accuracy.
Unfortunately, Google’s dominance as the most popular search engine, coupled with its reliance on publicly available internet data, raises concerns about the dissemination of misinformation through Google searches.
In nuanced subjects like history and culture, chatbots may struggle unless they are specifically programmed to provide definitive answers, which raises dilemmas about the role they should play. It is a difficult ethical issue on which AI experts remain undecided.
Regarding the complexity of the issue, AI expert Dr Sasha Luccioni, a research scientist at Hugging Face, said: “There really is no easy fix, because there’s no single answer to what the outputs should be.”
“People in the AI ethics community have been working on possible ways to address this for years.”
“It’s a bit presumptuous of Google to say they will ‘fix’ the issue in a few weeks. But they will have to do something,” she continued.
Professor Alan Woodward, a computer scientist at Surrey University, said it sounded like the problem was likely to be “quite deeply embedded” both in the training data and overlying algorithms which would be difficult to unpick.
“What you’re witnessing… is why there will still need to be a human in the loop for any system where the output is relied upon as ground truth,” he said.
Until a fix is found, political bias and misinformation will continue to proliferate on these platforms, underlining the need for AI developers to deliver robust solutions that allow technology and ethics to be balanced seamlessly.