Google To Tighten Rules Around Political Ads Made With AI

In a move to address the rising threat of deceptive content, Google has announced strict requirements for political advertisements on its platforms.

The tech giant is set to demand marketers disclose when political ads employ artificial intelligence (AI) to generate images and audio. This decision comes in response to the rise in AI-driven tools capable of creating convincing digital content, which has raised concerns about the potential for misinformation and disinformation in political campaigns.

Countering the AI-Driven Disinformation Surge

A Google spokesperson said the new policy, set to take effect in November, is aimed at curbing increasingly sophisticated AI-powered disinformation tactics. The rules arrive roughly a year ahead of the next United States presidential election, reflecting Google’s stated commitment to transparency in political advertising.

While Google’s existing ad policies already prohibit manipulating digital media to deceive or mislead people on political matters, the upcoming update specifically targets election-related ads.

These ads will be required to “prominently disclose” the use of “synthetic content” that portrays real or lifelike individuals and events. Google suggests using labels like “this image does not depict real events” or “this video content was synthetically generated” to alert viewers to the presence of AI-generated content.

Banning False Claims to Preserve Election Integrity

In addition to demanding disclosure of synthetic content, Google’s ad policy explicitly prohibits false claims that could undermine public trust in the election process. This approach seeks to maintain the integrity of political discourse on Google’s platforms.

Google’s commitment to transparency in political advertising extends beyond synthetic content disclosure. The company already requires political ads to reveal their sponsors and makes ad information available through an online library. Under the new policy, any digital alterations in election ads must be flagged in a “clear and conspicuous” manner so that users are properly informed.

Defining Synthetic Content

To clarify which content requires disclosure, Google offers examples, including AI-generated images and audio that depict individuals saying or doing things they never did, or events that never occurred. The guidelines are intended to cover a wide range of potentially misleading synthetic content.

Recent incidents have underscored the urgency of addressing AI-generated misinformation. In March, a fabricated image of former US President Donald Trump being arrested circulated on social media, created using AI tools. Similarly, a deepfake video of Ukrainian President Volodymyr Zelensky discussing surrender to Russia emerged in the same month.

In June, a campaign video for Ron DeSantis attacking former President Trump featured AI-altered images depicting Mr. Trump embracing and kissing Anthony Fauci on the cheek. These instances serve as stark reminders of the ever-evolving capabilities of generative AI and its potential for misuse.

Google’s Ongoing Efforts

In response to the growing challenges posed by AI-driven disinformation, Google emphasises its ongoing investment in technology to detect and remove such content, reflecting the company’s commitment to maintaining the integrity of its platforms and safeguarding democratic processes.

As the November deadline approaches, Google’s proactive stance on political advertising transparency is poised to set a precedent in the tech industry, reinforcing the importance of ethical AI use and mitigating the risks associated with AI-generated synthetic content in the political landscape.