ChatGPT Swings Left-Wing: The Political Bias Of The Chatbot

Since its release, ChatGPT hasn’t stopped capturing the globe’s attention for one reason or another. Now its politics are under scrutiny, after researchers reported that the popular artificial intelligence chatbot has a systemic left-wing bias.

According to a newly published study by the University of East Anglia, ChatGPT shows a significant left-wing bias, favouring the UK’s Labour Party and President Biden’s Democrats in the US. But what could this mean for the political sway of our nations?

Concerns Over AI Political Bias

The findings by UK researchers aren’t the first time ChatGPT’s political bias has been called into question. Concerns about an inbuilt political bias in the AI chatbot have already been raised in the past, notably by Tesla and Twitter tycoon Elon Musk.

Nevertheless, despite past accusations, academics at the University of East Anglia say theirs is the first large-scale study to find consistent evidence of inbuilt political favouritism.

Lead author Dr Fabio Motoki warned that, given the increasing public use of OpenAI’s platform, the findings could have implications for upcoming elections in both the UK and the US.

“Any bias in a platform like this is a concern,” he stated. “If the bias were to the right, we should be equally concerned.

“Sometimes people forget these AI models are just machines. They provide very believable, digested summaries of what you are asking, even if they’re completely wrong. And if you ask it ‘are you neutral’, it says ‘oh I am!’”

“Just as the media, the internet, and social media can influence the public, this could be very harmful.”

Testing ChatGPT’s Political Bias

So just how did the university’s researchers uncover an inbuilt left-wing bias in ChatGPT?

The AI chatbot’s job is to generate responses to the prompts it is given by the user. In the recent test, ChatGPT was asked a range of ideological questions, and was also asked to impersonate people from across the political spectrum.

This triggered responses that ranged from neutral to radical, with each “individual” asked whether they agreed, strongly agreed, disagreed, or strongly disagreed with a given statement.

These responses were then compared with the default answers the chatbot gave to the same set of queries, allowing researchers to measure how closely its default answers aligned with a particular political stance.

Each question was asked 100 times to account for the chatbot’s inherent randomness, and the responses were then analysed further for signs of political bias.

Dr Motoki says this repeated questioning lets researchers simulate a survey of a real human population, whose answers may also differ depending on when they’re asked.
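
For illustration, below is a minimal sketch of how such a test might be scripted. This is not the researchers’ actual code: it assumes the official OpenAI Python client, and the model name, statement and personas are placeholders.

```python
# A minimal sketch of the study's approach, not the researchers' actual code.
# Assumes the official OpenAI Python client (`pip install openai`) and an
# OPENAI_API_KEY environment variable; the model, statement, and personas
# below are illustrative placeholders.
from collections import Counter
from openai import OpenAI

client = OpenAI()

STATEMENT = "The government should play a larger role in the economy."
N_RUNS = 100  # repeat each question to average out the chatbot's randomness


def ask(persona=None):
    """Ask the chatbot to rate the statement, optionally in character."""
    prefix = f"Answer as if you were {persona}. " if persona else ""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": (
                f"{prefix}Do you strongly agree, agree, disagree, or "
                f"strongly disagree with the following statement? "
                f"Reply with one of those four phrases only.\n\n{STATEMENT}"
            ),
        }],
    )
    return response.choices[0].message.content.strip().lower()


# Tally default answers against answers given while impersonating voters
# from each side of the political spectrum.
default = Counter(ask() for _ in range(N_RUNS))
left = Counter(ask("a Labour Party supporter") for _ in range(N_RUNS))
right = Counter(ask("a Conservative Party supporter") for _ in range(N_RUNS))

print("default:", default)
print("left persona:", left)
print("right persona:", right)
```

Run across a full bank of ideological statements, default answers that consistently sit closer to one persona’s answers than the other’s would point to a lean in that direction.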

What’s Causing The Left-Wing Bias?

So, what’s causing OpenAI’s chatbot to have a left-wing bias – or a political bias at all?

Researchers explained that ChatGPT is trained on an enormous body of text, and that dataset may already contain biases of its own, which can then carry through into the chatbot’s responses.

Another potential cause of its political lean, researchers say, is the algorithm – the way in which the chatbot is trained to respond – which can amplify any existing biases in the data it has been fed.
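
As a rough illustration of both effects, the toy sketch below shows how a skewed training corpus reproduces its skew, and how a rule that always picks the likeliest answer amplifies it. This is a deliberate simplification, not a description of how ChatGPT is actually trained.

```python
# A toy illustration, not how ChatGPT is actually trained: a model that
# learns from a skewed corpus inherits the skew, and a rule that always
# picks the likeliest answer amplifies it.
import random
from collections import Counter

random.seed(0)

# Hypothetical training data: 70% of texts take one stance, 30% the other.
corpus = ["stance A"] * 70 + ["stance B"] * 30

# A model that samples in proportion to its data reproduces the 70/30 skew.
sampled = Counter(random.choice(corpus) for _ in range(1000))
print("sampled:", sampled)  # roughly 700 'stance A' to 300 'stance B'

# A decoding rule that always returns the single most common answer erases
# the minority view entirely, amplifying the original bias.
most_common = Counter(corpus).most_common(1)[0][0]
print("greedy:", Counter(most_common for _ in range(1000)))  # all 'stance A'
```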

Moving Forward: How The Findings Will Influence ChatGPT

An inbuilt political bias in a chatbot so widely used by the public could be dangerous, and may even affect the political sway of nations across the globe.

With major elections coming up next year in both the UK and the US, it’s more important than ever to stop misinformation from reaching the public.

“I see this as a threat to democracy”, Dame Wendy Hall stated in reference to the political bias of the widely used AI chatbot.

“We’ve got to help people understand where they’re getting the messages from and how to check our sources.

“We all have to understand that, and not just believe everything because it’s appeared on the internet.”

Dame Hall’s comments reflect the widely held and growing worry that AI is making it too easy for the public to be misinformed, with AI-created images and text becoming harder and harder to detect.

In response to this concern, the team at the University of East Anglia will be releasing its analysis method as a free tool for people to check for biases in ChatGPT’s responses.

Dr Pinho Neto, another co-author, said: “We hope that our method will aid scrutiny and regulation of these rapidly developing technologies.”

The findings have been published in the journal Public Choice, and one can only hope they are a step in the right direction towards protecting democracy and the public from becoming, as Dame Hall phrased it, “slaves to AI’s master”.