Meet Matthieu Boutard, President and Co-Founder of Expert Moderation Company Bodyguard.ai

Bodyguard began life in 2019 as a phone app enabling users to filter out toxic content on social media platforms. It was born from a desire to tackle racism, homophobia, cyberbullying, and other forms of bigotry or violence that people can experience online. The technology has now grown into an artificial intelligence platform that can be applied by organisations or brands to moderate content on their social media platforms, website chatrooms, or even inside chat functions within video games.

Our vision is to preserve freedom of speech while also protecting people from hateful or toxic content polluting their online experience. Our clients include the French Professional League, video content creators, Jellysmack, and gaming company Paradox, as well as many other brands and businesses. People express passionate opinions creatively online, and we are relied upon to ensure those conversations remain palatable without ever censoring anyone.
 
 
Bodyguard - Choose positive engagement
 

What do you think makes this company unique?

 
Many organisations offer moderation services, but few can cope with the scale of online commentary that Bodyguard handles, and with such extraordinarily high accuracy. We free people from the horrible task of having to wade through millions of potentially toxic comments, using AI that gets more knowledgeable and accurate over time.

It’s smart enough to tell terms of endearment from insults, and to spot long-standing relationships between internet users in which language that would be eye-watering to outsiders is ‘normal’ banter between them. It can protect individuals, platforms, or entire brands across all their online properties from toxic content and its backlash, identifying about 90 per cent of it with an extremely low error rate.

By combining human review with AI, and using a linguistics team comprising both men and women from around the world, we avoid inherent bias. Plus, we own our intellectual property, so we have full control over it. We’ve also recently released our first Online Toxicity Barometer – a comprehensive analysis of over 170 million comments, across 1,200 brand platforms, in six languages, over a year. We identified and categorised almost 9 million instances of toxic or harmful content to show how brands are besieged by online negativity.
 

 

How has the company evolved over the last couple of years?

 
Charles Cohen and I have ‘big tech’ backgrounds but felt that the issue of online toxicity wasn’t likely to be solved at those kinds of corporations. We met when I was at Google.org and Charles was applying for funding there. Between us we managed to raise over €2 million in 2019, which enabled us to recruit the linguistics team to work with the AI, and by January 2021 Bodyguard had grown into a B2B AI product with its first 20 customers across the media, sports, gaming, and social network industries.

By 2022 we were partnering with Keen Venture Partners (UK), Ring Capital (FR), and Starquest Capital (FR), for a further €9 million that’s taking us from our native France into the UK.
 

What can we hope to see from Bodyguard in the future?

 
We’ll be making the AI compatible with more languages than the original six (English, French, Spanish, Italian, Portuguese, and German), as well as expanding from text-only moderation to audio and video content.

Our intention is to ensure the AI can deal with all existing forms of online expression as accurately as possible before we have to teach it about physical expressions, body movements, hand signs, and so on for the metaverse. We already have the functionality to give an individual user full autonomy over the degree of moderation applied to their account. For our B2B clients, we work with them to precisely adapt their moderation levels to the needs of their specific business and community. If you want to see everything, you will!