
When OpenAI first announced GPT-5.2 last month, it quietly disclosed a new safety feature it called "age prediction." Considering ChatGPT proper isn't exactly an "all ages" kind of tool, it makes sense that users under the age of 18 should have protections in place to shield them from harmful content. The company says that users who indicate they're under 18 already receive an altered experience to "reduce exposure to sensitive or potentially harmful content," but if the user doesn't voluntarily share how old they are with OpenAI, how does the company enforce these protections? Here's where age prediction comes in.

How age prediction for ChatGPT works

On Tuesday, OpenAI officially announced its new age prediction policy, which, like other age verification systems being used by the likes of Roblox, uses AI to guess how old a user is. If the system decides that a particular user is under the age of 18, OpenAI will adjust the experience accordingly, with the goal of keeping all interactions age-appropriate.

Here's how it works: The new age prediction model looks at the user's behavior within the app as well as general account data. That includes things like how old the account is, what times of day the user accesses ChatGPT, usage patterns, and, of course, the age the user says they are. From all this data, the model estimates how old the user likely is. If the model thinks they're over 18, they get the full experience; if the model thinks they're under 18, they get the "safer experience." If the model isn't confident either way, it defaults to that safer experience.
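Based on that description, the decision logic boils down to a classifier plus a confidence threshold that fails closed. Here's a minimal Python sketch of that flow; to be clear, the signal names, the predict_age_with_confidence stub, and the threshold value are all my assumptions, since OpenAI hasn't published its model or thresholds, only the default-to-safer behavior.

    from dataclasses import dataclass

    @dataclass
    class AccountSignals:
        # Signals OpenAI says the model weighs; these field names are invented.
        stated_age: int | None            # the age the user told OpenAI, if any
        account_age_days: int             # how old the account is
        active_hours: list[int]           # times of day the user accesses ChatGPT
        usage_patterns: dict[str, float]  # opaque behavioral features

    def predict_age_with_confidence(signals: AccountSignals) -> tuple[int, float]:
        # Stand-in for OpenAI's undisclosed classifier. This trivial version
        # weakly trusts the stated age so the caller's low-confidence default
        # gets exercised; a real model would weigh all the signals above.
        if signals.stated_age is not None:
            return signals.stated_age, 0.5
        return 18, 0.0

    def choose_experience(signals: AccountSignals) -> str:
        # Map a predicted age to "full" or "safer," failing closed: the one
        # behavior OpenAI does specify is that low confidence means the safer
        # experience. The threshold value here is illustrative, not disclosed.
        predicted_age, confidence = predict_age_with_confidence(signals)
        CONFIDENCE_THRESHOLD = 0.9
        if confidence < CONFIDENCE_THRESHOLD:
            return "safer"
        return "full" if predicted_age >= 18 else "safer"

Whatever the real model looks like, that fail-closed default is the load-bearing design choice: an uncertain guess costs an adult some convenience rather than exposing a teen to restricted content.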

What's restricted in the "safer" version of ChatGPT

That limited experience means that, for anyone the model thinks is under 18, ChatGPT will try to reduce the following content types (sketched as a simple policy table after the list):

  • Graphic violence or gore

  • Viral challenges that might inspire "risky or harmful behaviors"

  • Role play that is sexual, romantic, or violent in nature

  • Self-harm descriptions

  • Content promoting "extreme" beauty standards, unhealthy dieting, or body shaming
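Taken together, that list reads like a policy table the safer mode consults. Here's a toy sketch of the idea; the category keys and the "reduce" framing are my assumptions, since OpenAI describes outcomes rather than implementation:

    # Hypothetical policy table built from OpenAI's published category list;
    # the keys and structure are illustrative, not OpenAI's actual schema.
    REDUCED_FOR_MINORS = {
        "graphic_violence": "graphic violence or gore",
        "viral_challenges": "challenges that might inspire risky or harmful behaviors",
        "charged_roleplay": "role play that is sexual, romantic, or violent",
        "self_harm": "self-harm descriptions",
        "body_image": "extreme beauty standards, unhealthy dieting, or body shaming",
    }

    def should_reduce(category: str, experience: str) -> bool:
        # "Reduce," per OpenAI's wording, not necessarily block outright.
        return experience == "safer" and category in REDUCED_FOR_MINORS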

The company says that its approach is informed by "expert input" as well as literature discussing child development science. (It's not clear how much of that input comes from direct interviews and coordination with experts, and how much, if any, comes from independent research.) The company also acknowledges "known teen differences in risk perception, impulse control, peer influence, and emotional regulation" when compared to adults.

AI isn't always great at age prediction

The biggest risk with any of these age prediction models is that they'll sometimes get it wrong; misclassification is an unfortunate habit all AI models share. That cuts both ways: You don't want someone too young accessing inappropriate content in ChatGPT, but you also don't want someone over 18 stuck with a limited account for no reason. If you end up in the latter situation, OpenAI has a solution: direct age verification through Persona. This is the same third-party service Roblox uses for its age verification, which hasn't gone very well thus far.

That doesn't necessarily spell doom for OpenAI. Roblox overhauled age verification for a massive user base accustomed to a particular kind of multiplayer experience, and the rollout left users unable to chat with others outside their newly assigned (and often incorrect) age categories. ChatGPT's age prediction, by contrast, only controls the experience of one user at a time. To that end, OpenAI will let you upload a selfie as an added verification step if the prediction model alone isn't enough. Interestingly, OpenAI doesn't say anything about the option to upload an ID for verification, which other companies, like Google, have provided.
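Pieced together from OpenAI's description, the recovery path is a simple escalation: the safer default sticks unless a Persona check clears you. A rough sketch of that flow, where verify_selfie_with_persona is a placeholder for the third-party step OpenAI hasn't detailed:

    def resolve_experience(predicted: str, wants_appeal: bool) -> str:
        # Escalation flow as described: the safer default sticks unless a
        # misclassified adult passes Persona's selfie verification.
        if predicted == "full" or not wants_appeal:
            return predicted
        return "full" if verify_selfie_with_persona() else "safer"

    def verify_selfie_with_persona() -> bool:
        # Placeholder for the third-party check; the real call goes to Persona.
        return False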

I'm not necessarily a fan of age prediction models, as I think they often sacrifice user privacy in the name of creating age-appropriate experiences. But there's little doubt that OpenAI has to do something to limit the full ChatGPT experience for younger users. Many of ChatGPT's users are under 18, and some of the content they can encounter is wildly inappropriate, whether it's instructions on getting high or advice on writing suicide notes. In some tragic cases, minors have taken their own lives after discussions with ChatGPT, leading to lawsuits against OpenAI.

I don't have any great answers here. We'll just have to see how this new age prediction model affects the user experience for minors and adults alike, and whether it actually manages to create a safer experience for younger, more impressionable users.
