ChatGPT’s new age-prediction model is rolling out globally, but the system, which automatically switches content filters to “teen mode” when it decides a user is under 18, seems a little overzealous.
The goal of using AI to identify underage users and route them into an age-appropriate version of the chatbot has its appeal, especially with ChatGPT’s adult mode expected to arrive soon. OpenAI believes its models can infer a user’s likely age from behavior and context.
But ChatGPT doesn’t seem to apply those protections only to users under 18. Many adult subscribers have found themselves forced into the youth-mode version of ChatGPT, with restrictions that block more sophisticated topics. The problem has persisted since OpenAI began testing the feature a few months ago, but that hasn’t stopped the wider rollout.
The technical details of the feature are unclear. According to OpenAI, the system uses a combination of behavioral signals, account history, usage patterns, and sometimes linguistic analysis to estimate a user’s age.
When there is uncertainty, the model plays it safe. In practice, that means newer accounts, users with late-night habits, or those asking about topics relevant to teens may get caught in the safety net, even if they are longtime ChatGPT Pro subscribers.
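OpenAI hasn’t published how any of this works, but the “play it safe” behavior is easy to picture as a simple decision rule: some classifier scores how likely a user is to be an adult, and anything below a confidence threshold defaults to teen mode. Here is a minimal Python sketch of that pattern; every signal name, weight, and threshold is invented for illustration and is not OpenAI’s actual system:

```python
from dataclasses import dataclass

# Hypothetical signals; OpenAI hasn't disclosed what it actually uses.
@dataclass
class AccountSignals:
    account_age_days: int      # newer accounts look riskier
    late_night_ratio: float    # share of sessions between midnight and 5 a.m.
    teen_topic_ratio: float    # share of prompts about teen-oriented topics
    is_paid_subscriber: bool

def adult_confidence(s: AccountSignals) -> float:
    """Toy score in [0, 1]; higher means more likely an adult."""
    score = 0.5
    score += min(s.account_age_days / 365, 1.0) * 0.2
    score -= s.late_night_ratio * 0.2
    score -= s.teen_topic_ratio * 0.3
    score += 0.1 if s.is_paid_subscriber else 0.0
    return max(0.0, min(1.0, score))

def select_mode(s: AccountSignals, threshold: float = 0.7) -> str:
    # "Play it safe": anything below the threshold gets teen mode,
    # which is exactly how long-time adult subscribers get swept up.
    return "adult" if adult_confidence(s) >= threshold else "teen"

# A month-old account with late-night usage lands in teen mode, paid or not.
print(select_mode(AccountSignals(30, 0.6, 0.4, True)))  # -> "teen"
```

The key design choice in any rule like this is where the uncertainty lands: biasing toward teen mode minimizes the risk of exposing minors to adult content, at the cost of misclassifying cautious-looking adults.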
Confirming your age to an AI
At first glance, this appears to be a classic case of good intentions running into clumsy implementation. OpenAI clearly wants to create a safer experience for younger users, especially given the tool’s growing reach into education, families, and youth creative projects.
For misclassified users, the company says the problem is easy to fix: you can confirm your age using a verification tool in Settings. OpenAI relies on a third-party service, Persona, which in some cases may ask users to submit a government-issued ID or a selfie video to confirm who they are.
For many, however, the extra clicks aren’t the biggest problem. It’s being misjudged by a chatbot and then having to hand over more personal information to refute the accusation.
Requiring identification, even if it is optional and anonymous, raises questions about data collection, privacy, and whether it is a backdoor to stricter age verification policies in the future. Some users now believe that OpenAI is testing the waters for full identity verification under the guise of youth safety, while others fear that the model could be partially trained using the documents they submit, although the company insists this is not the case.
“A great way to force people to upload selfies,” one Reddit user wrote.
“If (OpenAI) asks me for a selfie, I will cancel my subscription and delete my account,” another wrote. “I understand why they do this, but they should find a less invasive way.”
In a statement on its support page, OpenAI clarified that it never sees the ID or the image itself. Persona simply confirms whether the account belongs to an adult and returns a yes-or-no result. The company also says that all data collected through the process is deleted after verification, and that its sole purpose is to correct misclassifications.
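That architecture, in which the platform only ever receives a boolean verdict while the documents stay with the verifier, is a common pattern in third-party identity verification. The Python sketch below illustrates the general shape of such a flow; the class, method names, and session tokens are hypothetical stand-ins, not Persona’s or OpenAI’s actual API:

```python
import secrets

# Hypothetical in-memory stand-in for a verification provider.
# In a real integration, the ID or selfie goes directly to the provider;
# the platform only ever receives the boolean verdict.
class VerificationProvider:
    def __init__(self):
        self._sessions = {}

    def create_session(self, account_id: str) -> str:
        """Platform opens a session; the user completes it on the provider's site."""
        token = secrets.token_urlsafe(16)
        self._sessions[token] = {"account_id": account_id, "is_adult": None}
        return token

    def complete_check(self, token: str, is_adult: bool) -> None:
        """Provider inspects the documents and records only the outcome."""
        self._sessions[token]["is_adult"] = is_adult

    def result(self, token: str) -> bool | None:
        # The platform sees yes or no, never the underlying documents.
        return self._sessions[token]["is_adult"]

provider = VerificationProvider()
token = provider.create_session("user-123")
provider.complete_check(token, is_adult=True)  # happens on the provider's side
if provider.result(token):
    print("Misclassification corrected: restore adult mode for user-123")
```

Whether users trust that separation is another matter; the design limits what the platform can retain, but it still requires handing sensitive documents to someone.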
The tension is evident between OpenAI’s goal of personalized AI and the need to layer on safety mechanisms that don’t alienate users. And no explanation of what the company can infer about someone from their behavior is likely to satisfy everyone.
YouTube, Instagram, and other platforms have tested similar age-estimation tools, and all have faced complaints from adults accused of being underage. But now that ChatGPT is a regular companion in classrooms, home offices, and therapy sessions, an invisible AI filter that suddenly treats you like a kid feels particularly personal.
OpenAI says it will continue to refine the model and improve the verification process based on user feedback. But an adult asking for food and wine pairing ideas, only to be told they’re too young to drink, may well leave ChatGPT annoyed. No adult likes being mistaken for a child, least of all by a robot.