Harnessing Data & AI To Protect Gamers From Toxic Behaviour Online

Steve Sobel, head of media and entertainment at Databricks, explores…


Video games and online content are an escape for many – a way to relax, be entertained and connect with others. But for some, they’re an opportunity to “troll”. Toxic comments and online bullying are a persistent problem: 62% of young gamers have experienced them at some point. Sustained online abuse can drive users away from platforms and games, making churn one of the biggest factors affecting game studios’ bottom lines. Additionally, the personal toll it takes on users can be extremely damaging.

With the metaverse looming over the future of gaming, tackling this issue has never been more crucial. If we really are heading towards a world where games and online socialising are fully immersive, then we must find a way to stop toxic behaviour online – to protect game studios and, most crucially, the wellbeing of players.


Making sense of the data


One of the most important capabilities for game companies is being able to take data from numerous sources – chat, gameplay, voice conversations between players, streams, files – and analyse it all. The challenge is not just the multiple data sources, but also the many different data types.

For instance, whilst chat data is often simply text, images and voice communication are unstructured, unlabelled data – which is typically harder to analyse. Being able to unify all this disparate data in one place, quickly convert unstructured and unlabelled data into a form that can be analysed, and then actually make sense of it all, will be key.
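
To make that a little more concrete, here is a minimal sketch of how chat logs and pre-transcribed voice data might be pulled into one table using PySpark. The paths, column names and the upstream speech-to-text step are assumptions for illustration, not a prescribed pipeline:

```python
# Minimal sketch: unify chat text and voice transcripts into one table.
# Paths, schema and the upstream speech-to-text step are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("unify-player-comms").getOrCreate()

# Structured chat logs, e.g. newline-delimited JSON with player_id, ts, message
chat = (spark.read.json("/data/raw/chat/")
        .select("player_id", "ts",
                F.col("message").alias("text"),
                F.lit("chat").alias("source")))

# Voice audio is assumed to have been run through a speech-to-text step upstream;
# here we simply read the resulting transcripts.
voice = (spark.read.json("/data/raw/voice_transcripts/")
         .select("player_id", "ts",
                 F.col("transcript").alias("text"),
                 F.lit("voice").alias("source")))

# One unified view of player communications, ready for toxicity analysis
comms = chat.unionByName(voice)
comms.groupBy("source").count().show()  # quick sanity check across sources
```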


Spotting the difference 


It’s also essential for game companies to monitor toxic events in as close to real time as possible. This enables rapid resolution – even automating a response, such as muting players or alerting a CRM system to the incident – which can have a direct impact on player retention.

However, it’s also crucial that this speed doesn’t come at the cost of high false-negative or false-positive rates. False negatives allow toxic behaviour to continue unchallenged, while false positives can wrongfully flag players who aren’t behaving toxically for removal. An in-game toxicity model that is both fast and accurate is a must for a positive user experience.
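
As a simplified sketch of what that can look like in practice, an incoming chat message might be scored by an off-the-shelf text classifier and acted on only above a tuned threshold. The model name, threshold and moderation hooks below are assumptions for illustration:

```python
# Simplified sketch of near real-time toxicity scoring on chat messages.
# The model choice, threshold and moderation hooks are illustrative assumptions.
from transformers import pipeline

# An off-the-shelf toxicity classifier (example model from the Hugging Face Hub)
toxicity = pipeline("text-classification", model="unitary/toxic-bert")

TOXICITY_THRESHOLD = 0.9  # tuned to balance false positives against false negatives


def handle_message(player_id: str, message: str) -> None:
    """Score an incoming chat message and act only if it is very likely toxic."""
    result = toxicity(message)[0]  # e.g. {"label": "toxic", "score": 0.97}
    if result["label"] == "toxic" and result["score"] >= TOXICITY_THRESHOLD:
        mute_player(player_id)                              # hypothetical moderation hook
        log_incident(player_id, message, result["score"])   # hypothetical CRM alert


def mute_player(player_id: str) -> None:
    print(f"Muting {player_id}")  # placeholder for the real moderation API


def log_incident(player_id: str, message: str, score: float) -> None:
    print(f"Incident logged for {player_id} (score {score:.2f})")  # placeholder CRM call
```

The threshold is the lever here: raising it reduces wrongful flags at the risk of letting more abuse through, so it needs tuning against real labelled data.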


Embracing new architecture 


To even begin tackling any of these challenges, gaming organisations need a firm handle on their data and must be able to pair it with advanced analytics, such as artificial intelligence.

Embracing a new open data architecture, such as a data lakehouse, could be crucial in enabling this. A data lakehouse can store all data – structured, semi-structured and unstructured – while maintaining a high level of data quality, performance, security and governance. This means the architecture can support real-time data applications, data science and machine learning on one platform. All of this will be crucial in allowing game companies to build models that tackle in-game toxicity.
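
A rough sketch of that pattern on a Spark-based lakehouse might look like the following, where streaming chat events land in a governed table that data scientists can then read directly for model training. The paths, schema and table layout are assumptions for illustration, and Delta Lake is assumed to be available:

```python
# Minimal lakehouse-style sketch: stream raw chat events into a Delta table,
# then read the same table for model training. Paths and schema are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("toxicity-lakehouse").getOrCreate()

schema = StructType([
    StructField("player_id", StringType()),
    StructField("ts", TimestampType()),
    StructField("message", StringType()),
])

# Real-time ingestion: continuously append incoming chat events to a Delta table
(spark.readStream.schema(schema).json("/data/landing/chat_events/")
      .writeStream
      .format("delta")
      .option("checkpointLocation", "/chk/chat_events")
      .start("/lakehouse/bronze/chat_events"))

# Data science on the same table: the accumulated history is available as a batch read
history = spark.read.format("delta").load("/lakehouse/bronze/chat_events")
training_set = history.filter("message IS NOT NULL")  # starting point for model training
```

The point of the pattern is that the streaming application and the machine learning work run against the same governed tables, rather than separate copies of the data.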

The future of gaming is going to look very different. But there will always be those looking to misuse platforms and cause harm to others. To protect users from online abuse, and to keep them coming back to games, game studios must level up their in-game toxicity models. This will mean a fair, fun future for everyone in the world of gaming.