How Artificial Intelligence Is Shaping the 2024 Election Landscape

In 2024, elections are set to take centre stage, with roughly half the world preparing to vote over the coming year. Among them, the United States presidential election looms particularly large: memories of the tumultuous 2020 Biden versus Trump showdown are still fresh, and anticipation is running high for what lies ahead.

With several upcoming elections unfolding on the global stage, the integration of artificial intelligence (AI) into the electoral process raises the question: How will AI reshape the dynamics of this coming election year?

AI’s Influence On 2024 Elections

“ChatGPT can make mistakes. Consider checking important information.”

This is the cautionary message that flashes up next to ChatGPT’s search button. But what has prompted the platform to deem such a warning necessary?

Regrettably, this alert highlights the enormity of the disinformation problem on the platform. Since ChatGPT’s launch by OpenAI in November 2022, AI has permeated our daily lives, becoming a primary source of information for many. While this has brought many benefits, it has also spawned numerous problems with the dissemination of false information.

The primary concern is the increasing use of deepfakes and AI-generated voices. Deepfakes are fabricated videos in which a person’s face or body is digitally manipulated, while AI-generated voices can produce convincing but false audio recordings.

To see the damaging effects of AI-driven disinformation, look no further than the war in Gaza, where TikTok became a harmful hub of misinformation thanks to altered images and mislabeled videos.

Such incidents demonstrate how AI platforms are already being leveraged to spread fake and even harmful information. But who are the culprits behind this, and why are they doing it?

Meet The “Hacktivists”: Hackers And ChatGPT

Let’s turn specifically to generative AI tools such as ChatGPT, where concerns are rising about the avenue they offer hackers seeking to target elections around the world.

According to Time, the latest global threats report from cybersecurity company CrowdStrike links state-backed hackers – so-called “hacktivists” – with ChatGPT and similar AI tools, suggesting these tools are enabling them to carry out a growing number of cyberattacks and scams.

While countries have always had methods of influencing other nations’ elections, CrowdStrike suggests that advancements in generative AI have given hackers from Russia, China, North Korea, and Iran novel ways to leverage technology against the US, Israel, and European nations.

Their primary strategy: targeting generative AI platforms such as ChatGPT to spread misinformation.

Regarding upcoming elections, Adam Meyers, head of counter-adversary operations at CrowdStrike, warns that generative AI is “really going to democratise the ability to do high-quality disinformation campaigns”.

“Given the ease with which AI tools can generate deceptive but convincing narratives, adversaries will highly likely use such tools to conduct [operations] against elections in 2024,” Meyers stated.

“Politically active partisans within those countries holding elections will also likely use generative AI to create disinformation to disseminate within their own circles.”

But the targeting of ChatGPT by hacktivists isn’t simply something to fear going forward; it appears to be happening already.

Time reports that “hacktivists” have already had some success in influencing forthcoming elections, citing a surge in cyberattacks targeting Taiwanese government offices last month, which cybersecurity firm Trellix suspects was orchestrated by China-linked actors.

Moreover, Russian military intelligence has been targeting Microsoft and OpenAI platforms to gain insight into information that has passed through them, such as satellite communication protocols and radar imaging technologies, as these platforms are commonly used to translate technical papers and store crucial topical information.

This highlights an important point: People are increasingly using platforms like ChatGPT to search for, translate, condense and store information. So much information is passed through ChatGPT that one only needs to imagine how damaging it may be if hackers know how to access and influence this.

Thus, the question remains: How can we preempt such threats?

How Platforms Deal With The AI Threat

It is an increasingly critical priority that platforms where AI is used find effective strategies to address the resulting misinformation.

Returning to the misinformation spread on TikTok about the war in Gaza, the social media platform took swift action to curb the flow of false content.

TikTok removed what it deemed to be “violative content and accounts”.

“We immediately mobilised significant resources and personnel to help maintain the safety of our community and integrity of our platform,” a spokesperson said.

TikTok emphasised its “zero-tolerance” stance towards the incitement of misleading and dangerous ideologies on the platform.

Time reports that, amidst growing concerns over AI’s impact on upcoming elections, certain tech companies developing AI tools have taken steps to deal with the problem themselves.

Last month, OpenAI unveiled new policies aimed at combating disinformation and misuse of its tools ahead of the 2024 elections, introducing verified news and image-authenticity programs.

Microsoft, too, has asserted its commitment to exploring and testing various AI technologies to implement effective security measures.

So, are we in safe hands?

Ultimately, it is important to proceed with caution.

After all, despite efforts by AI platforms like ChatGPT to curb misinformation, OpenAI would not display its cautionary message to users unless it had genuine concerns about the platform’s safety and credibility.

Given the unprecedented pace of AI development, it is safe to assume that, at least for now, complete safety on these platforms cannot be guaranteed.

Looking ahead to the 2024 election year, individuals will likely be encouraged to refrain from relying solely on AI platforms for information. Indeed, as AI becomes increasingly intertwined with our daily lives, exercising caution and restraint in its usage will be paramount.