ChatGPT To Allow Adult Content: What Should Parents Do To Keep Children Safe?

New research from consumer research company GWI shows growing unease among parents over children’s online behaviour. In response, around 41% of parents support monitoring screen time through limits and allocated times, and 72% believe social media harms children. The biggest worries are cyberbullying at 71%, harmful content at 67%, and contact with strangers online at 62%.

Parents are also calling for stronger rules. GWI found that 61% want age verification, 59% support banning smartphones in schools, and 40% favour digital curfews. The findings point to families struggling to manage children’s use of devices as technology becomes harder to avoid.

At the same time, the next generation is already engaging deeply with AI. Among Gen Z users, 23% strongly agree that AI reinforces unrealistic beauty standards. Around 22% use AI tools to talk about mental health, and 20% to discuss personal relationships. Teenagers are evidently forming emotional bonds with tech in ways many parents may not fully understand.

Will Adult Content On ChatGPT Worsen This Debate?

OpenAI CEO Sam Altman announced that the company will start allowing mature content for verified adult users from December. The announcement has brought even more concern to the debate around digital boundaries and responsibility, especially as children are using and engaging with online platforms more than ever.

Sam Altman tweeted, “We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues. We realise this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right.

“Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.

“In a few weeks, we plan to put out a new version of ChatGPT that allows people to have a personality that behaves more like what people liked about 4o (we hope it will be better!). If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it (but only if you want it, not because we are usage-maxxing).

“In December, as we roll out age-gating more fully and as part of our ‘treat adult users like adults’ principle, we will allow even more, like erotica for verified adults.”

What Does This Change Mean?

James Clark, AI and Digital Regulation partner at law firm Spencer West LLP commented on this, saying: “The move by OpenAI fits into a wider pattern of AI tools cautiously opening to adult content (see for example X’s Grok), but under heavy regulatory and ethical scrutiny in light of new laws such as the UK Online Safety Act (OSA) and the EU Digital Services Act.

“By offering erotica, ChatGPT could fall into the OSA’s category of ‘pornographic content providers’. This classification triggers heightened compliance obligations, including proactive risk assessments, transparency and reporting. Established providers of pornographic content have been grappling in recent months with the requirement to age-verify UK users before permitting access to content, and the challenge of doing this in a robust and effective way whilst still preserving privacy and security for users in a sensitive area.

“If regulated as a pornographic content provider, ChatGPT would need to demonstrate that its age-verification mechanisms are effective, privacy-preserving, and resistant to circumvention. Weaknesses could expose it to enforcement by Ofcom.

“There are also distinct risks associated with adult content in an AI context, which aren’t applicable to other forms of adult content. Specifically, AI-generated erotica raises issues of consent, deepfakes, and potential misuse. The UK government and Ofcom are increasingly focused on preventing non-consensual AI pornography, which could spill over into stricter rules for consensual adult erotica too.”

Other experts have shared ways in which parents can keep their children safe from the dangers of this change…

Our Experts:

  • Samantha Straub, Counsellor and Parent Coach, Teen Savvy Coaching
  • Jessica Plonchak, LCSW, Executive Clinical Director, Choice Point Health
  • Heather Barnhart, Cellebrite Sr. Digital Forensics Expert and SANS Curriculum Lead
  • Leslie Tyler, Director of Parent Education, Pinwheel
  • Richard Ramos, Author and Parenting Expert

Samantha Straub, Counsellor and Parent Coach, Teen Savvy Coaching

“When it comes to keeping kids safe on ChatGPT, it helps to remember that ChatGPT, like the internet or social media, is an environment, not just a tool. And every environment carries both opportunities and risks. Sure, it can feel to the child like they are talking to a single entity, but that entity has access to the full internet, which is absolutely an environment.

“Parents often assume safety is about blocking access, but it’s really about readiness. Before sending your child into any digital environment, ask yourself: ‘Do they have the judgment, values, and decision-making skills to navigate what they might encounter?’ You wouldn’t send a young child, or even some teens, to an unsupervised party if they weren’t ready to handle the social risks. The same principle applies online.

“A child’s readiness depends on many factors, but two of them are critical: their ability to make solid decisions when left to their own devices, and the strength of their communication pattern with you. If your relationship is open and your child confides in you, they’re more likely to come to you when something confusing or inappropriate happens online.

“When introducing ChatGPT, model how to use it. Sit beside them as they explore. Just like medical students learn through ‘see one, do one, teach one’, kids can start by watching you use the tool, then try it with your support, and eventually show you how they’re using it responsibly.

“If your child is older and you’re just beginning these conversations, it’s not too late, but expect some pushback. Approach it as a partnership rather than surveillance, and emphasise that your goal is to keep them both curious and safe.”

Jessica Plonchak, LCSW, Executive Clinical Director, Choice Point Health

“ChatGPT has become a new tool for children as part of their learning curiosity, so parents should ensure open communication, proper guidance, and supervision.

“Parents should know that banning these platforms is not the solution. They are highly advised to co-use ChatGPT and similar platforms by exploring together and discussing what is and is not appropriate. This will help parents build digital literacy with their children and encourage them to make the best use of skills such as critical thinking and empathy. Parents should also understand that the real safety issue is not exposure to adult content alone; it is the lack of emotional readiness to process this kind of complex information.”

Heather Barnhart, Cellebrite Sr. Digital Forensics Expert and SANS Curriculum Lead

“AI tools like ChatGPT are becoming part of everyday life, and parents need to take an active role in guiding how their children use them. This starts with enabling parental controls and having open conversations about what’s appropriate to ask or share online. For example, set clear rules such as no phones in private places like the bathroom and no video chats in the bedroom.

“Kids need to be taught to assume everything online is fake. Unless they are physically with that person, they must assume what’s being shared is not real – even if the account is a known friend, it could be an older sibling or cousin who has access.

“Parents should also stay informed through resources from the National Center for Missing & Exploited Children, the SANS Institute family resources and law enforcement programmes. With proactive monitoring, education and engagement, families can help their children navigate AI tools responsibly and safely.”

Leslie Tyler, Director of Parent Education, Pinwheel

“First, there is no way to ‘ensure’ safety on ChatGPT. But if parents decide to allow their teens to use ChatGPT (the minimum age is 13), they should take advantage of the recently added parental controls. OpenAI says that explicit content will be limited with these teen accounts, and images can be blocked.

“Second, parents should stay in the loop with how their kids are using AI chat apps. Are they looking up information, creating images, writing papers, or chatting with an AI girlfriend or boyfriend? All of these activities carry different kinds of risks, such as substituting for real relationships or cheating in school. So parents need to communicate with their children about what they are doing, what they are learning, and what the AI says. This can be done in a co-learning, curious way, where parents listen for concerning patterns like relying too much on what the AI says.”

Richard Ramos, Author and Parenting Expert

“As technology evolves, so must our parenting skills. AI platforms like ChatGPT can be powerful tools that support everything from time management to school research, but they’re not digital babysitters, and while there might be a manual of sorts, we’re all navigating new waters and dangers.

“Parents need to be proactive: talk openly with their kids about what they’re accessing, set clear boundaries, and monitor their usage. This isn’t about a lack of trust; it’s about understanding the realities of the digital world and staying aware of what children are sourcing. Safety isn’t just about filters, it’s about trust, education, and consistent communication. When we build those values into both our parenting and our use of technology, we create a true foundation for safety.”