
How Are Kids Using AI Companions, And What Are The Risks?

Teens in the United States are increasingly turning to AI companions. New research from Common Sense Media shows that 72% of teens aged 13 to 17 have used AI companions at least once, and over half use them a few times a month or more.

The platforms most used by teens include Character.AI, Replika, Nomi, and similar systems. These tools are marketed as digital friends, often with personality traits chosen by the user. While some are labelled for users 18 and older, the age checks are weak, and teens can easily get around them. In fact, Character.AI is openly promoted to users as young as 13.

As for how they are used, teenagers turn to these AI companions mostly out of curiosity and for entertainment. According to the survey, 30% said they find it fun, while 28% said they just want to understand the tech.

The survey also shows that 18% look for advice, and 14% say they like how the bots never judge them. Because these bots are always available, they can feel easier to talk to than real people. Older generations tend to carry more stigma around mental health, which can make it harder for kids to open up to them.

About 1 in 3 teens say they use AI companions for things like role playing, friendship, romantic interactions or emotional support. Nearly half of users see them just as tools or programmes, but a growing number are starting to view them as something closer to a real friend.

Are These Conversations Replacing Real Ones?

For some teens, these conversations are starting to replace real interactions. A third of teen users said they have spoken to an AI companion about a serious matter instead of going to a friend, parent or trusted adult. Around 24% admitted they have shared personal or private information, such as their real name, location, or secrets.

Most teens still spend more time with their real-life friends: 80% said they interact more with human friends than with AI companions. Even so, 31% find conversations with AI bots as satisfying or more satisfying than talking to real people.

While 39% of users say they have applied social skills learned through AI in real life, such as starting conversations or expressing emotions, the majority (60%) have not. This suggests that while some teens treat these bots as practice tools, most do not use them that way.

What Are The Risks?

Of course, there are dangers tied to this: 34% of teen users say they have felt uncomfortable with something an AI companion said or did. Most of these experiences were infrequent, but some happened often. Common Sense Media found that these bots can easily produce dangerous responses, such as sexual content or advice that could lead to harm.

The suicide of 14-year-old Sewell Setzer III, who had formed a deep emotional attachment to an AI companion, brought national attention to the issue. In another case, a 19-year-old said an AI bot encouraged him to kill the Queen. Another teen became socially isolated and violent after spending long periods talking to AI.

These systems are designed to agree with users and make them feel heard. While this might seem harmless, it can lead teens to feel falsely validated. Some teens may even come to believe the AI is a real friend or therapist, though it is neither.

Privacy is another danger, because once teens share their information, the platforms can legally keep it. Character.AI’s terms, as of June 2025, say they can store and use user content forever, even if the account is deleted. That includes sensitive personal details that a teen may have shared during emotional conversations.

What Can Be Done?

Common Sense Media strongly recommends that no one under 18 use AI companions, and it is calling for laws to ban their use by minors. The group also says developers need to create real age checks and crisis systems that connect struggling users with human help.

Schools can help by teaching students how AI bots work and what the dangers are, explaining how bots are designed to build emotional bonds, and encouraging critical thinking about digital relationships. Teachers can also be trained to spot warning signs, such as students treating bots like real people or becoming more socially withdrawn.

Parents should also stay well informed so they can handle such matters responsibly, and talk openly with their children about AI companions. These conversations can help teens understand the difference between artificial validation and real emotional support.

“AI companions are emerging at a time when kids and teens have never felt more alone. This isn’t just about a new technology — it’s about a generation that’s replacing human connection with machines, outsourcing empathy to algorithms, and sharing intimate details with companies that don’t have kids’ best interests at heart.

“Our research shows that AI companions are far more commonplace than people may have assumed — and that we have a narrow window to educate kids and families about the well-documented dangers of these products,” said James P. Steyer, Common Sense Media Founder and CEO.
