Virtual Influencers: The Next Generation of Information Warfare?

Wasim Khaled, CEO at Blackbird.ai, explores…

Social media influencers have become a key vector for the spread of conspiracy theories and harmful narratives across online platforms in recent years. These public-facing figures can lend legitimacy to the disinformation they share, given their pre-established credibility and popularity among their online followers.

Influencers can also be used to conceal the genuine source of spurious messaging and create the illusion of authentic communication. Indeed, the co-option of influencers to propagate disruptive narratives online is already a staple of hostile information operations, notably witnessed during the 2016 US presidential election and the COVID-19 pandemic.

Some human influencers may decline offers to publicise disinformation for pay; a computer program never will.

Make noise for the digital human

Enter the “virtual influencers” – computer-generated, deepfake avatars that behave like human social media personalities, online brand ambassadors, or content creators, except that they are wholly fictitious creations.

Virtual influencers have a wide range of applications and are not necessarily pernicious by design. Their market appeal is clear: unlike their mortal counterparts, a virtual influencer is under the complete control of its creators, unbound by limitations of time, space, and human agency. The virtual influencer will never misrepresent corporate values; fall foul of personal scandals; or miss a day’s work. Its digital likeness can be infinitely exploited to maximize output across diverse industries and platforms in a way that human capacity and talent cannot.

When it comes to information warfare, however, one key difference between virtual and human influencers poses the most risk to the future information integrity of our online spaces: the application of AI technology to effectively present and communicate disinformation content to new audiences.

The goal of any influencer account, virtual or otherwise, is to maximize user engagement through the provision of content that resonates with potential audiences.

Digital humans are gaining trust with AI

For a digital avatar, the most effective way to achieve this is through machine-learning algorithms that shape the virtual influencer’s behavior to match shifting trends and follower preferences. AI can process vast amounts of data to identify which content resonates with specific demographics and adjust the avatar’s output accordingly.

AI can also learn how best to inculcate trust and relatability, then integrate this within its communication patterns. Even the most image-conscious human influencer cannot replicate this level of curated detail and personalized appeal embedded within a digital character’s very software.

When this technology is applied to disruptive information operations, the result is a significant force multiplier for networked communications. The use of AI-optimized virtual influencers, therefore, represents the next generation in information warfare: harmful narratives optimized to appeal to audiences not only in terms of what content is shared but also how it is expressed by the avatar sharing it.

Uncertainty ahead: Challenges of the new digital reality

The uncanny valley of virtual influencers’ hybrid digital-human communication presents further challenges. As deepfake technology grows more sophisticated, users may be unaware that influencer accounts are in fact digital creations owned by entities with unknown agendas. Conversely, an avatar’s openly fictional status may create a false sense of security, particularly among younger audiences, who may be less likely to question the motives of a CGI character.

Most worrying of all, we are ill-equipped to face these new digital realities. Measures such as Meta’s recently proposed “ethical framework” for virtual influencers will likely be insufficient, given that platforms already struggle to regulate influencer activity, particularly around disclosure requirements for content sponsorship and the propagation of harmful content.

As virtual avatars become more ubiquitous, new tools must be leveraged to identify accounts that use AI and deepfake technology to spread malign content online, backed by robust regulatory systems. The next generation of information warfare is already taking root; we must act now.