AI Can Replicate Human Writing in Social Media Posts

In a recent study published in Science Advances, researchers found that artificial intelligence (AI) can generate social media text that appears more human-like than text produced by actual humans.

OpenAI’s language model GPT-3 can mimic human writing styles and language patterns so accurately that it becomes challenging for people to differentiate AI-generated content from human-generated content. Released in 2020, GPT-3 has since gained significant attention for its ability to produce highly realistic and credible text from user prompts.

The Power of AI Text Generators

The increasing interest in AI text generators prompted researchers to explore how AI shapes the information space and how people perceive and interact with both accurate and false information. They focused on 11 topics known for attracting disinformation, including climate change, vaccine safety, COVID-19, and 5G technology. For each topic, the researchers used GPT-3 to generate tweets containing both true and false information, and collected a random sample of real tweets from Twitter on the same topics.

The researchers conducted a survey with 697 participants from various countries to measure how well individuals could distinguish AI-generated tweets from those written by real users. Participants were shown the tweets and asked to judge whether each one contained accurate information or disinformation, and whether it was written by a real person or generated by an AI. The findings of the study revealed some interesting insights.

Recognising Disinformation with AI

Participants were better at identifying disinformation in tweets written by real users, referred to as “organic false” tweets, than in tweets generated by GPT-3, known as “synthetic false” tweets. In other words, people are more likely to recognise false information when it comes from real users on Twitter, and disinformation generated by AI proved more convincing than disinformation produced by humans.

On the other hand, participants were more likely to correctly identify accurate information in tweets generated by GPT-3, referred to as “synthetic true” tweets, than in tweets written by real users, known as “organic true” tweets. When GPT-3 produced accurate information, people were more likely to perceive it as true than when the same kind of accurate information was written by real users.

According to the study, people had a difficult time differentiating tweets written by real users from those generated by GPT-3. GPT-3’s ability to imitate human writing styles and language patterns effectively blurred the line between AI-generated and human-generated content. Participants often perceived information produced by AI as more likely to come from a human, further underscoring how difficult it is to tell AI-generated and human-generated content apart.