ChatGPT Adds Age Prediction Features: Here’s How It Works

ChatGPT has started using age prediction on consumer plans to estimate whether an account is likely to belong to someone under 18. OpenAI says the tool helps place the right safeguards on teen accounts while letting adults use the service as expected, within its safety rules.

The company ties the move to its Teen Safety Blueprint and its rules for how models should behave around under-18s. OpenAI says young people deserve tools that support learning and creativity while also protecting wellbeing. Age prediction sits alongside safety tools that already exist for teens who say they are under 18 during sign-up.

OpenAI shared details of the rollout as it began turning the system on across accounts. The feature is going live worldwide, with a short delay in the EU to meet regional rules, according to OpenAI.

How Does The System Decide Who Is Under 18?

ChatGPT uses a model that estimates age from signals linked to an account. OpenAI says the system looks at things like how long the account has existed, the usual times of day someone uses the service, usage patterns over time and the age a user has given.
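
OpenAI has not published how these signals are weighed, so any concrete version is guesswork. A minimal sketch in Python, where every field name, weight and threshold is an assumption rather than OpenAI's method:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AccountSignals:
    """Hypothetical inputs mirroring the signals OpenAI describes."""
    account_age_days: int             # how long the account has existed
    late_night_session_ratio: float   # share of use at typical late-night hours
    stated_age: Optional[int]         # the age the user gave, if any

def predict_under_18(s: AccountSignals) -> float:
    """Toy score in [0, 1]; OpenAI's actual model is unpublished."""
    if s.stated_age is not None and s.stated_age < 18:
        return 1.0                    # a stated age under 18 is decisive
    score = 0.5                       # uninformed starting point
    if s.account_age_days > 3 * 365:
        score -= 0.2                  # long account history skews adult
    if s.stated_age is not None and s.stated_age >= 18:
        score -= 0.2                  # a stated adult age lowers, not zeroes, the score
    if s.late_night_session_ratio > 0.5:
        score -= 0.1                  # usage-time pattern, weakly weighted
    return max(0.0, min(1.0, score))
```

The shape is the useful part: several weak signals combined into one estimate, with the stated age treated as one input among many rather than the final word.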

The company says running the system during the rollout helps it learn which signals work best. Those lessons feed back into updates to improve accuracy over time. OpenAI also says no automated system gets everything right.

Adults placed into the under-18 experience can confirm their age through a check run by Persona, a third-party identity service. The check uses a live selfie or a government-issued ID, depending on the country.

OpenAI says users can start this check at any time through the account settings. After Persona confirms someone is 18 or older, the extra safety settings are removed. The change can take a short time to apply across the account.
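
In account terms, verification is a state change that overrides the prediction. A minimal sketch, with all names invented for illustration (neither OpenAI nor Persona documents a public API for this flow):

```python
from enum import Enum, auto

class VerificationState(Enum):
    NOT_STARTED = auto()
    PENDING = auto()           # Persona is reviewing a selfie or ID
    CONFIRMED_ADULT = auto()   # Persona confirmed the user is 18 or older

def safeguards_active(state: VerificationState, predicted_minor: bool) -> bool:
    """Whether the extra teen settings apply; logic is illustrative only."""
    if state is VerificationState.CONFIRMED_ADULT:
        return False           # verified 18+: the extra settings are removed
    # Until verification completes, a flagged account keeps the safeguards,
    # matching the default-to-safe behavior described further below.
    return predicted_minor
```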

What Changes After An Account Is Flagged?

Accounts the system sees as likely belonging to someone under 18 get extra safety settings straight away. These settings aim to limit exposure to sensitive material. OpenAI lists graphic violence, gory material, risky viral challenges, sexual or violent role play, self-harm content and material that promotes extreme beauty standards or unhealthy dieting.
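
Written as configuration, that list is a deny-set keyed by content category. The identifiers below paraphrase OpenAI's published list; they are not from any real policy schema:

```python
# Identifiers paraphrase the categories OpenAI lists; the schema is invented.
TEEN_RESTRICTED_CATEGORIES = frozenset({
    "graphic_violence",
    "gore",
    "risky_viral_challenges",
    "sexual_or_violent_roleplay",
    "self_harm",
    "extreme_beauty_standards",
    "unhealthy_dieting",
})

def allowed_for_teen(category: str) -> bool:
    """True when a content category is outside the teen deny-set."""
    return category not in TEEN_RESTRICTED_CATEGORIES
```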

OpenAI says research on child development shaped these rules. The company points to known differences in risk perception, impulse control, susceptibility to peer pressure and emotional regulation during the teen years. When age signals look unclear or incomplete, the system defaults to the safer experience.
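
That fallback is the central design choice: ambiguity resolves toward the teen experience, never away from it. A sketch of the decision, with thresholds that are pure assumptions:

```python
from typing import Optional

def choose_experience(under_18_score: Optional[float]) -> str:
    """Map a prediction score to an experience; all thresholds are invented."""
    if under_18_score is None or 0.35 <= under_18_score <= 0.65:
        return "teen_safeguards"   # missing or unclear signals: default to safe
    return "teen_safeguards" if under_18_score > 0.5 else "adult"
```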

Teens can still use ChatGPT to learn, create and ask questions. The extra rules mainly change how certain topics are handled. Parents also have tools to adjust the experience further. These controls can set quiet hours, manage features like memory or training, and send alerts if the system spots signs of acute distress.
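
Those controls map naturally onto a per-teen settings object. A sketch, with every field name assumed rather than taken from the actual product:

```python
from dataclasses import dataclass
from datetime import time

@dataclass
class ParentalControls:
    """Illustrative settings object; the field names are assumptions."""
    quiet_hours_start: time = time(21, 0)   # no access from 9 pm...
    quiet_hours_end: time = time(7, 0)      # ...until 7 am
    memory_enabled: bool = False            # parents can manage memory
    training_opt_in: bool = False           # and whether chats help train models
    distress_alerts: bool = True            # notify on signs of acute distress
```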

Privacy is a core part of the design, according to OpenAI. Persona handles the age check and deletes selfies or ID images within seven days. OpenAI does not receive copies of IDs or photos. The company only gets a date of birth or a confirmation that someone is 18 or older, stored under its privacy policy.
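
The privacy model is data minimization at the boundary between the two companies: images stay with Persona and only a minimal result crosses over. A sketch of what that result might contain; the shape is inferred from the article, not from any documented interface:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass(frozen=True)
class VerificationResult:
    """The only data OpenAI says it receives. Selfies and ID images never
    appear here; Persona deletes them on its side within seven days."""
    is_18_or_older: bool
    date_of_birth: Optional[date] = None   # a date of birth, when one is shared
```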

OpenAI says people who do not want age prediction can verify their age instead. After that, the system stops running age prediction on the account.