Does Fake News Underscore The Need For A New Business Model For Social Media?

Meta, the parent company of a number of leading social media platforms including Facebook and Instagram, is rolling out new tools to combat disinformation ahead of the French presidential elections scheduled for April. WhatsApp users, for instance, can now flag suspicious messages for fact-checking by Meta’s media partners such as Agence France-Presse (AFP). Facebook and Instagram, meanwhile, are running awareness campaigns to help users spot misinformation such as conspiracy theories and scams.

Any initiative to fight disinformation is welcome. The last few years have shown just how vulnerable online audiences are to conspiracy theories around topics such as COVID-19 and vaccines, and how an ad-based business model amplifies this kind of extremist content. While some social media firms, such as the youth-centred livestreaming app Yubo, have crafted ad-free business models that are naturally suited to curbing the dissemination of disinformation, many large technology companies have struggled to find effective ways to respond to fake news.

Despite efforts to flag suspect posts or point users to resources from more reputable organisations, the spread of disinformation and conspiracy theories on major social media platforms seems to have accelerated rather than slowed.

Disinformation in Action

Indeed, in just the last few weeks, evidence has mounted of foreign meddling and organised disinformation behind the ‘trucker convoys’ that paralysed the Canadian capital of Ottawa and have sprung up in other locations around the world. Though the protests apparently began as a Canadian movement, researchers found that they were being promoted by, amongst others, pro-Trump political groups in the United States.

Facebook groups that had previously promoted anti-vaccine conspiracy theories quickly changed their names to include keywords such as ‘freedom’ and ‘convoy’ and began calling for similar protests in the United States. Soon, articles and videos containing disinformation about the protests were being liked, viewed, shared and reshared across the internet.

The kicker? Many of these groups were in fact run by fake accounts based in Vietnam, Bangladesh and Romania. The accounts had merely pivoted their inflammatory rhetoric from a pro-Trump agenda to one that focused, for the moment at least, on trucker protests. Perhaps unsurprisingly, the posts have been shared hundreds of thousands of times, with one viral video, posted by a Bangladeshi content farm, receiving nearly a million views.

These clickbait farms, many of them based in developing countries, have an obvious incentive: money. Their articles are monetised through advertising programmes such as Facebook’s Instant Articles and Google’s AdSense, so the farms are incentivised to post inflammatory content that generates maximum shares. If an article goes viral on one platform, it is often recycled and reposted on another, and the more these articles are shared, the more advertising revenue they earn. The result is an information ecosystem that rewards disinformation, distrust and anger, and feeds chaos.

It’s not just content farms. Defence researchers have shown how Russian state media, such as RT, and Russian disinformation operations, such as the notorious Internet Research Agency, have homed in on the trucker protests, amplifying division, inflaming rhetoric and spreading false information to a receptive, angry audience.

These content farms, and even the Russian propaganda machine, are merely vectors for disinformation. The root of the problem is a business model that relies on advertising revenue and on algorithms that boost whatever engages users, even if it’s low-quality clickbait or blatant disinformation.

Some Apps, Such As Yubo, Are Carving Out A Different Path

It does not have to be this way; social media does not have to be a fertile ground for disinformation and hate speech.

Some social media platforms have consciously chosen a different business model. The social network Yubo, for example, is particularly cautious about safety given its userbase of teenagers and young adults. Yubo has rolled out a swathe of safety features, including the ability for users to block specific words, emojis or phrases, and age-verification technology to ensure users are eligible to be on the platform.

Yubo has also deliberately chosen not to carry ads, relying instead on a ‘freemium’ business model and investing heavily in content moderation. With no advertising revenue tied to engagement, the platform has little incentive to amplify disinformation or hate speech.

Yubo’s users also can’t “like” content or “follow” other users. The reasoning is simple: to encourage positive interactions and discourage the spread of viral but divisive and harmful content. And by removing something as simple as a “follow” feature, Yubo stops its userbase from devolving into a small group of influencers performing for a passive audience of consumers, and endeavours to keep the platform a space for forging genuine connections.

Many of the big names in social media, however, have yet to follow Yubo’s example, and have instead stuck to a business model anchored in microtargeted advertising, even as it’s become harder to defend.

How many reports of widespread disinformation have to surface before these social media giants comprehensively overhaul their business models, rather than making small, incremental tweaks such as the additional features Meta is implementing ahead of the French elections? Social media is here to stay, but more platforms need to reinvent their business models to curb the spread of disinformation and harmful content.