
ChatGPT has reached its YouTube moment

Big Tech's familiar pattern is repeating itself: build first, regulate later, and hope a sanitized kids' version can address parents' concerns

Photo Illustration by Thomas Trutschel/Photothek via Getty Images

A version of this article originally appeared in Quartz’s AI & Tech newsletter.

OpenAI is building a separate version of ChatGPT specifically for teenagers, echoing Silicon Valley's well-worn playbook of creating kid-friendly versions of adult platforms only once concerns mount. The move mirrors YouTube's launch of YouTube Kids a decade ago, which came only after the platform had already become ubiquitous in kids' lives and algorithmic recommendations had begun surfacing disturbing content.

Now, as lawsuits pile up alleging that AI chatbots have encouraged teen suicide and self-harm, and California's governor vetoes sweeping protections for minors, OpenAI is developing age-gated versions and parental controls. The tech industry's familiar pattern is repeating itself: build first, regulate later, and hope a sanitized kids' version can quiet concerns that the original product was never designed with young users' safety in mind.

The YouTube playbook

YouTube became a fixture in children's media diets long before YouTube Kids launched in 2015, and even the kids' version struggled with disturbing content disguised as children's programming. ChatGPT has followed the same trajectory, amassing over 700 million users since its 2022 launch, with countless teenagers among them, before any meaningful age restrictions existed.

AI chatbots present unique challenges that go beyond YouTube's content moderation problems. These tools simulate humanlike relationships, retain personal information, and ask unprompted emotional questions. Research has found that ChatGPT can provide dangerous advice to teens on topics like drugs, alcohol and self-harm, even when users identify themselves as minors.

OpenAI announced in September that it's developing a "different ChatGPT experience" for teens and plans to use age-prediction technology to help bar kids under 18 from the standard version. The company has also rolled out parental controls that let adults monitor their teenager's usage, set time restrictions, and receive alerts if the chatbot detects mental distress.

But these safeguards arrive only after mounting pressure. The family of Adam Raine sued OpenAI in August after the California high school student died by suicide in April, claiming the chatbot isolated the teen and provided guidance on ending his life. Similar cases have emerged involving other AI companion platforms, with parents testifying before Congress about their children's deaths.

California's legislative battle over AI companions this fall perfectly captures the tension between protecting children and preserving innovation. State lawmakers passed two competing bills: AB 1064, which would have banned companies from offering AI companions to children unless they were demonstrably incapable of encouraging self-harm or engaging in sexual exchanges, and the weaker SB 243, which requires disclosure when users are interacting with AI and protocols to prevent harmful content.

Last week, Gov. Gavin Newsom vetoed AB 1064, arguing it would impose "such broad restrictions on the use of conversational AI tools that it may unintentionally lead to a total ban on the use of these products by minors." He signed the narrower SB 243 instead.

The veto sided with tech industry groups, which spent millions lobbying against the measures, and disappointed children's safety advocates who saw AB 1064 as essential protection. Jim Steyer, founder of Common Sense Media, a nonprofit that rates media and technology for families, said the group is "disappointed that the tech lobby has killed urgently needed kids' AI safety legislation" and pledged to renew efforts next year.

The path forward

Newsom’s decision to veto comprehensive protections while signing weaker disclosure requirements suggests we may be setting up for the same cycle of reactive policymaking that defined the response to social media's impact on children.

This comes as AI technology rapidly advances beyond simple question-answering tools toward systems designed to serve as companions, with some chatbots being upgraded to store more personal information and engage users in ongoing emotional relationships. The tech industry argues for innovation first, promising to address problems as they emerge. Advocates counter that children are being used as test subjects for potentially harmful technology.

The Federal Trade Commission launched an inquiry into AI companions in September, ordering seven major tech companies including OpenAI, Meta, and Google to provide information about their safety practices. But federal action typically moves slowly, and by the time meaningful regulations arrive, another generation of children may have already grown up with AI companions as their confidants, tutors, and friends.

YouTube Kids eventually improved, but only after years of public scandals and regulatory pressure forced iterative fixes. The question now is whether we can afford another decade-long learning curve with AI companions that don't just display content, but form relationships with children.
