The Rise Of Bots For Social Media Intimidation

The growth of social media has completely changed the way we communicate and access information – especially in times of conflict.

Whilst in many ways this has been a good thing, in other ways it hasn't. In fact, social media has sparked new forms of intimidation, specifically through the use of social bots. These bots are automated accounts that mimic human behaviour on social platforms. By liking, commenting and posting, they can boost divisive posts, intimidate users and manipulate algorithms.

These bots have become a common cyber-tactic during times of division or political unrest. 

According to research from the Oxford Internet Institute, campaigns using social bots have been detected in over 80 countries, a significant increase from previous years.

This includes everything from small-scale intimidation tactics to large-scale misinformation campaigns used by governments, political groups, and private companies looking to influence public opinion on a large scale.


Social Media As A Platform For Hate Speech


As bots become more sophisticated and operate at scale, moderating content becomes harder. It can also be difficult to tell the difference between genuine posts and AI-generated ones.

In fact, moderating hate speech on social media is difficult in general, given the many languages and dialects involved.

As we approach Holocaust Remembrance Day, CyberWell, a tech non-profit focused on combating online hate, has reported challenges in moderating Holocaust denial content.

CyberWell reported that 296 Holocaust denial posts over the last year reached 11 million users on Meta and X, with poor moderation efforts and posts likely boosted by bots.

Disturbingly, some of these posts not only deny the Holocaust but also use deeply offensive language, showing the challenge of combatting racism online.

The good news is that platforms like Meta and X have already taken steps to find and remove fake accounts and bot-driven content, especially when these are used to spread misinformation or manipulate discussions. But there is certainly a long way to go.



How To Recognise Bots On Social Media


For social media users, telling genuine accounts apart from bots can be difficult. However, it's worth learning how to spot them and report them if necessary.

Here are a few ways to check:


Look at their profile

Check for generic profile pictures (or none at all!) and limited personal information. Bots often have very bare profiles and aren't sophisticated enough to mimic genuine human profiles.


Check their activity

Look at how often they post and at what times; it's common for bots to post at odd hours or in rapid bursts. Have a look and see if something seems a bit off to you.


Quality of posts

Analyse the quality of their posts and how they interact. Bots often use the same phrases or emojis over and over again and will not usually engage in meaningful conversations.


Check consistency

Bots will usually share identical messages across many accounts or focus on specific topics to boost certain viewpoints. If an account posts only about a single topic, often with just a single word or emoji, it is unlikely to be real!


High activity in a short space of time

Bots are created in large batches and may start posting immediately after being created. Check when the account was created: new accounts that are extremely active can be a red flag.
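For the more technically minded, the checks above can be sketched as a simple heuristic scorer. This is a minimal illustration only, not a real detection system: the account fields, thresholds and scoring are all assumptions made for the example, not how any platform actually detects bots.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List

# Hypothetical account record; all field names are illustrative assumptions.
@dataclass
class Account:
    has_profile_picture: bool
    bio_length: int                  # characters of personal information
    created_at: datetime
    post_timestamps: List[datetime]
    posts: List[str]

def bot_likelihood_score(acct: Account, now: datetime) -> int:
    """Crude heuristic: one point per red flag from the checklist (0-5)."""
    score = 0

    # 1. Bare profile: no picture and little personal information.
    if not acct.has_profile_picture or acct.bio_length < 20:
        score += 1

    # 2. Suspicious activity pattern: many posts within a single hour.
    hours = [t.replace(minute=0, second=0, microsecond=0)
             for t in acct.post_timestamps]
    if hours and max(hours.count(h) for h in set(hours)) >= 10:
        score += 1

    # 3. Low-quality, repetitive posts: over half are exact duplicates.
    if acct.posts and len(set(acct.posts)) < len(acct.posts) / 2:
        score += 1

    # 4. Mostly single-word or single-emoji posts.
    if acct.posts and sum(len(p.split()) <= 1 for p in acct.posts) > len(acct.posts) / 2:
        score += 1

    # 5. Brand-new account with very high activity.
    if now - acct.created_at < timedelta(days=7) and len(acct.posts) > 50:
        score += 1

    return score  # 0 = few red flags, 5 = many
```

In practice, real bot detection relies on far richer signals (network structure, coordinated timing across accounts, content fingerprinting), but the idea of combining several weak red flags into one judgement is the same one a careful human reviewer applies.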




As more debate and discussion moves online, it is important to understand the role bots play in shaping public opinion and spreading misinformation.

Whilst social media platforms are doing what they can to stop them, many slip through the net. Users must take it upon themselves to spot and report fake accounts to keep our digital spaces safe and authentic.