OpenAI has taken steps toward making ChatGPT safer for teenagers, saying it will prioritise teen well-being ahead of privacy and freedom. The company acknowledges that AI is already a deep part of young people’s lives and wants to set clearer boundaries for safe use. Experts say the move shows how seriously OpenAI is taking its role in shaping how AI interacts with the next generation.
ChatGPT’s Age Prediction System
OpenAI said it is rolling out an age prediction system to identify users under 18. Teenagers will be routed to a stricter version of ChatGPT that blocks graphic sexual content and, in rare cases of acute distress, may involve law enforcement.
If the system cannot tell a user’s age, it will default to the under-18 version, while adults can verify their age for unrestricted use. This is meant to give teens a safer digital space while ensuring parents and guardians know that harmful material is filtered.
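To picture how such a defaulting rule might work in practice, here is a minimal illustrative sketch: when the predicted age is uncertain, the service falls back to the restricted experience, and only verified adults get the unrestricted one. The function names, fields, and confidence threshold below are assumptions made for illustration, not details OpenAI has published.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class Experience(Enum):
    UNDER_18 = "under_18"  # stricter defaults; graphic sexual content blocked
    ADULT = "adult"        # unrestricted use after age verification


@dataclass
class AgePrediction:
    estimated_age: Optional[int]  # None when no estimate can be formed
    confidence: float             # 0.0 to 1.0


def choose_experience(pred: AgePrediction, verified_adult: bool,
                      min_confidence: float = 0.9) -> Experience:
    """Pick which ChatGPT experience to serve (hypothetical rule).

    Mirrors the defaulting behaviour described above: if the age cannot be
    determined confidently, fall back to the under-18 experience; verified
    adults get the unrestricted one. The threshold is an assumption.
    """
    if verified_adult:
        return Experience.ADULT
    if pred.estimated_age is None or pred.confidence < min_confidence:
        return Experience.UNDER_18  # uncertain -> safer default
    return Experience.ADULT if pred.estimated_age >= 18 else Experience.UNDER_18


# Example: an uncertain prediction defaults to the teen experience.
print(choose_experience(AgePrediction(estimated_age=None, confidence=0.0),
                        verified_adult=False))  # Experience.UNDER_18
```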
OpenAI stressed that the age-prediction tools will improve over time, particularly as the company incorporates feedback from families and educators.
Parental Controls On ChatGPT
By the end of the month, OpenAI will add parental controls for teens aged 13 and above. Parents will be able to link accounts and set rules, including disabling memory, restricting late-night use, and getting alerts if ChatGPT detects signs of distress.
If a teen appears to be in acute distress and parents cannot be reached, law enforcement may be contacted as a safeguard. The tools also let parents shape how ChatGPT responds, so its behaviour fits family values and expectations.
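The controls described above amount to a small set of per-family settings plus an escalation path for distress alerts. The sketch below is a hypothetical illustration of that idea; the class, field names, and notification flow are assumptions, not OpenAI's actual API.

```python
from dataclasses import dataclass
from datetime import time


@dataclass
class ParentalControls:
    """Hypothetical model of the controls described in the article."""
    linked_parent_email: str
    memory_enabled: bool = False  # parents can disable memory
    quiet_hours: tuple = (time(22, 0), time(6, 0))  # late-night restriction window
    distress_alerts: bool = True  # notify parents on signs of distress


def handle_distress_signal(controls: ParentalControls,
                           parent_reachable: bool) -> str:
    """Escalation path sketched from the article: alert the parent first,
    and escalate further only if the parent cannot be reached."""
    if not controls.distress_alerts:
        return "no alert configured"
    if parent_reachable:
        return f"alert sent to {controls.linked_parent_email}"
    return "parent unreachable; escalating to emergency services as a safeguard"


# Example usage with a hypothetical parent account.
settings = ParentalControls(linked_parent_email="parent@example.com")
print(handle_distress_signal(settings, parent_reachable=False))
```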
The company said these measures are being shaped with input from experts, policymakers, and advocacy groups to balance protection with usefulness.
In its blog post, OpenAI added that it will continue to work with partners to refine the system, saying this is only the beginning of a long-term plan to create a safer AI experience for teenagers worldwide.