Hundreds of Deepfake Video Ads Are Being Created Around UK Elections

According to a study by Fenimore Harper, more than 100 deepfake videos featuring Rishi Sunak flooded Facebook last month. This wave of AI-generated content poses a serious threat ahead of the upcoming general election. Marcus Beard, the founder of Fenimore Harper and a former Downing Street official, expressed alarm about the potential impact of these deceptive tactics on the democratic process.

The deepfake videos went beyond impersonating Rishi Sunak; they included fabricated footage of BBC newsreader Sarah Campbell reporting on a fictional scandal involving Sunak. This disinformation reached an estimated 400,000 people, potentially influencing public opinion. The study found that £13,000 was spent on 143 adverts originating from 23 countries, revealing a concerning global dimension to this AI manipulation.

Election Integrity at Risk

Marcus Beard, who played a key role in countering conspiracy theories during the COVID-19 pandemic, pointed to lax moderation of paid advertising, particularly on Facebook. He warned that the quality of deepfakes has improved, making them harder to detect, and underscored the urgent need for robust measures to counter the manipulation of elections through AI-generated falsehoods.

Despite the AI-generated content violating several of Facebook’s advertising policies, the study noted that only a fraction of the ads encountered had been removed. The threat is not just technological but also regulatory, as existing moderation mechanisms appear inadequate to tackle the growing menace of AI-driven deception.

Overwhelmed Moderation and Global Campaign

Facebook, a platform inundated with over 100 deepfake paid advertisements, faces challenges in curbing the spread of deceptive AI content. The deepfake campaign targeting Rishi Sunak and BBC newsreaders marked a shift, not only in scale but also in the scammers’ tactics. Operating from 23 different countries, these perpetrators overwhelmed Facebook’s moderation efforts by repeatedly uploading the same content, accumulating thousands of views with each attempt.

Meta, Facebook’s parent company, acknowledged the issue, stating that it removes content that violates its policies. However, the sheer volume of deceptive ads exposes the limitations of current moderation systems. A Meta spokesperson said that fewer than 0.5% of UK users saw any individual ad that did go live, but with more than half of the UK population on Facebook, even a small percentage can significantly influence public perception.
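
To put those percentages in perspective, here is a rough back-of-envelope calculation as a minimal Python sketch. The round numbers below are illustrative assumptions based on the phrasing above, not figures taken from the Fenimore Harper study or from Meta.

    # Illustrative estimate only; all inputs are assumed round numbers,
    # not data from the study or from Meta.
    uk_population = 67_000_000   # approximate UK population
    facebook_share = 0.5         # "over half of the UK population" on Facebook
    seen_single_ad = 0.005       # "less than 0.5%" of UK users per individual ad

    uk_facebook_users = uk_population * facebook_share
    reach_per_ad = uk_facebook_users * seen_single_ad

    print(f"Estimated upper-bound reach of one ad: {reach_per_ad:,.0f} people")
    # -> around 167,500 people for a single ad, before counting repeat uploads

Even under the platform's own framing, one ad at the 0.5% ceiling could reach a six-figure audience, and the campaign involved well over a hundred such ads.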

Global Democracy and the AI Threat

The timing could hardly be worse: this year, national elections are taking place in more than 40 countries that are collectively home to over 40% of the world's population. The interplay between advanced technology, social media platforms, and global elections is creating an intricate web of challenges that demands immediate attention.

Government and Platform Responses

The UK government, in response to the findings, assured the public of its commitment to safeguarding democratic processes. A government spokesperson highlighted ongoing efforts through the Defending Democracy Taskforce and dedicated teams. The recently enacted Online Safety Act also places new requirements on social media platforms to swiftly remove illegal misinformation and disinformation, including content generated by AI.

The BBC, a prominent target of the deepfake campaign, emphasised the importance of news consumers verifying information with trusted sources. Its investment in BBC Verify, a specialised team dedicated to countering disinformation, reflects the broadcaster's commitment to maintaining the integrity of information in an era of increasing falsehoods.