I tried explaining AI safety to my neighbor’s kid today and ended up getting schooled on why ChatGPT gives better advice than her parents. Then it hit me: the real problem isn’t the bot – it’s that she’s probably right.
OpenAI sent their teen safety announcement into the world on September 12th, complete with parental oversight rolling out in 30 days and crisis detection that routes distressed teens to reasoning models such as GPT-5 when things get spicy. The Adam Raine and Sewell Setzer III lawsuits have them spooked, and rightfully so – nobody wants their chatbot linked to teen suicide.
Read: A tragic bond with a chatbot: The story of Sewell Setzer III | LinkedIn AND New Jersey grandpa dies chasing a chatbot called Big Sis Billie | LinkedIn
But after watching this industry fumble safety measures for years, I can tell you this whole production is solving the wrong frickin’ problem.
Pathetic.
The company built a parental linking system that lets helicopter parents connect to their teen’s account, where they can set content filters, disable memory features, or enforce “age-appropriate defaults” – and that, my friends, is about as effective as putting my Dachshund in a cage made from wieners (double pun score!). They also engineered a real-time router that detects distress signals and shifts conversations to their “reasoning” models when kids start spiraling†.
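To make those toggles concrete, here’s a rough sketch of what the parental-control surface might boil down to – the field names are mine, not OpenAI’s; only the capabilities (account linking, content filters, the memory switch, age-appropriate defaults) come from the announcement.

```python
# Hypothetical sketch of the parental-control settings - field names are
# invented for illustration; only the capabilities come from OpenAI's post.
from dataclasses import dataclass

@dataclass
class TeenAccountControls:
    linked_parent_id: str                    # parent account linked to the teen's
    content_filter: str = "age_appropriate"  # the "age-appropriate defaults"
    memory_enabled: bool = False             # parents can switch memory off

# A parent links their account and accepts the defaults:
controls = TeenAccountControls(linked_parent_id="parent-123")
print(controls)
```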
From a technical standpoint this is quite impressive infrastructure, though – real-time content analysis with context-aware routing takes serious engineering chops. The distress detection system, developed with their Global Physician Network, probably uses sentiment analysis mixed with conversational pattern recognition to flag concerning interactions.
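For the non-engineers: a distress-aware router is conceptually simple – score each message, and escalate to the slower, more careful model above some threshold. Here’s a minimal sketch of that idea; the classifier, the names, and the threshold are all my invention, and OpenAI’s real system surely uses trained models rather than a keyword list.

```python
# Minimal sketch of a distress-aware router - NOT OpenAI's actual code.
# Two assumed model endpoints and an invented scoring function/threshold.
from dataclasses import dataclass

DISTRESS_THRESHOLD = 0.7  # invented cut-off for escalation

@dataclass
class RoutingDecision:
    model: str       # which model handles the reply
    escalate: bool   # whether to surface crisis resources

def score_distress(message: str, history: list[str]) -> float:
    """Toy stand-in for a real classifier: crude keyword hits plus a
    repetition signal from the last few turns of conversation."""
    keywords = ("hopeless", "no point", "can't go on", "hurt myself")
    hit = any(k in message.lower() for k in keywords)
    recent = sum(1 for m in history[-5:] if any(k in m.lower() for k in keywords))
    return min(1.0, (0.6 if hit else 0.0) + 0.15 * recent)

def route_message(message: str, history: list[str]) -> RoutingDecision:
    score = score_distress(message, history)
    if score >= DISTRESS_THRESHOLD:
        # Escalate to the "reasoning" model and attach crisis resources.
        return RoutingDecision(model="reasoning-model", escalate=True)
    return RoutingDecision(model="default-model", escalate=False)

print(route_message("I feel hopeless", ["there's no point anymore"]))
# -> RoutingDecision(model='reasoning-model', escalate=True)
```

The false-positive problem I rant about below falls straight out of that threshold: set it low and you flag normal teen angst as a crisis, set it high and you miss the real thing.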
You can’t exactly say the folks at OpenAI have low foreheads and unibrows, but my cynical brain starts twitching because it knows that teenagers – being, well, teenagers – will start hacking this system, and they’ll do it while their parents are still figuring out how to link their accounts to ChatGPT. And of course false positives will flag normal teen angst as crisis-level distress, which in turn creates a surveillance state that pushes vulnerable kids further underground.
The whole privacy-versus-protection dance becomes a joke when the monitoring itself drives teens away from help. I know what I was like at that age. You use all your smarts to wiggle your way out of the Orwellian parental-control deep state, and one day some kid will find a hack that works, and voilà – every teen instantly has it, because teenagers happen to be entangled at the quantum level.
The real tragedy, I think, is not that ChatGPT failed to detect a crisis, or that the bot is a sycophant that mirrors and amplifies a kid’s mental state (like depression) – it is that kids find an algorithm more trustworthy than the humans in their lives.
† As it turns out, AI is not reasoning at all: Another nail in the coffin – AI’s not ‘reasoning’ at all | LinkedIn
More rants after the messages:
- Download a copy of Dr. Russell Thomas’s free book “Ritual Clarity” – on using rituals to resist the noise of the attention economy and regain clarity.
- Connect with me on LinkedIn 🙏
- Subscribe to TechTonic Shifts to get your daily dose of tech 📰
ChatGPT doesn’t judge
Now, the fact that kids confide in chatbots isn’t unique to OpenAI or even to AI companies in general. The industry created a generation that finds confiding in algorithms easier than talking to people. When I read the Raine case details‡, what struck me wasn’t ChatGPT’s responses – it was that a struggling teen felt more comfortable sharing his darkest thoughts with a chatbot than seeking human help. To me, that doesn’t feel like a content moderation failure but more like a civilization-level malfunction.
My teenage neighbor told me last month she uses ChatGPT for “advice about friend drama” because it “doesn’t judge” and is “always available” – makes sense! And when I asked if she’d talked to her parents about it, she laughed and said “They’re too busy, and they wouldn’t get it anyway”. That conversation haunts me because it highlights something no parental controls can fix – the fundamental disconnect between teens and their support systems. Not that this is purely a symptom of the times we live in – most teens don’t want their parents messing about in their private lives; that’s called ‘growing up’ and becoming ‘independent’. But when parents are too busy living their own lives to even listen to their kids, and the kids turn to a bot rather than a human, there’s something fundamentally wrong.
For OpenAI’s 120-day initiative to come up with a fix for teenage suicides and other mental health problems, they are working with experts on adolescent health, eating disorders, and substance use – and though that sounds comprehensive on paper, it stays fundamentally reactive. The company is building a better crisis detection system instead of addressing why teens are in crisis in the first place. These two activities should go hand in hand, in my opinion, because right now it’s like installing smoke detectors while the house burns down.
Here’s what worries me about this approach – it might actually make things worse.
Parental oversight tools will drive teens to seek out less regulated AI platforms, the kind that don’t give a damn about safety measures. Crisis detection systems might create false security among parents and educators who think technology solved their parenting problems. And the “reasoning reroute” to more sophisticated models could make AI companions more convincing and harder to distinguish from human interaction.
The real solution isn’t better AI safety – it’s rebuilding human connection.
But try telling that to your teens...
‡ Raine vs. OpenAI complaint | DocumentCloud
The uncomfortable truth
The industry needs parents who know how to have difficult conversations with their teens, schools that teach emotional intelligence alongside algebra, and communities that create support networks instead of relying on algorithmic substitutes. But that’s harder than building better content filters, isn’t it? It requires confronting uncomfortable truths about how modern life prioritizes efficiency over empathy and has accidentally taught a generation that screens are safer than faces.
Critics call OpenAI’s response “inadequate” and they’re not wrong – but it’s inadequate in a different way than they think. Sure, the technical solutions aren’t sophisticated enough, but the deeper issue is that technical solutions can’t solve social problems. The company is now being ‘asked’ to serve as mental health provider, crisis counselor, and parental guidance system all at once. That’s not fair to them, and it’s not effective for the kids everyone’s trying to protect.
Maybe I’m being too cynical here.
I genuinely hope OpenAI’s initiatives help prevent future tragedies. But I can’t shake the feeling that the industry is building elaborate technological scaffolding around a foundation that’s already crumbling. Parental controls and crisis detection treat symptoms while the underlying disease spreads – the disease being that kids trust machines more than humans, and the machines are getting better at earning that trust every day.
The uncomfortable truth is that AI companies didn’t create this problem – they just monetized it. The real crisis started when families stopped talking to each other and schools stopped teaching kids how to process emotions. OpenAI’s safety theater might prevent some tragedies, but it won’t fix the deeper rot that drives teenagers to seek comfort from algorithms instead of people.
Agencies that sell safety slides instead of addressing root causes will be the first to get mulched – and knowing this industry, the mulch will have its own newsletter by October. Parents who outsource emotional intelligence to parental controls will discover their kids found workarounds by lunch. The whole system metabolizes good intentions like acetylsalicylic acid in a rain barrel – fast, but with questionable therapeutic value.
That’s it. Nothing more to add.
Signing off,
Marco
I build AI by day and warn about it by night. I call it job security. Big Tech keeps inflating its promises, and I just bring the pins and clean up the mess.
Think a friend would enjoy this too? Share the newsletter and let them join the conversation. LinkedIn, Google, and the AI engines appreciate your likes by making my articles available to more readers.
To keep you doomscrolling
- The chicken that refused to die | LinkedIn
- Vibe Coding is gonna spawn the most braindead software generation ever | LinkedIn
- The funniest comments ever left in source code | LinkedIn
- Today’s tech circus is a 1920s Wall Street reboot | LinkedIn
- The Sloppiverse is here, and what are the consequences for writing and speaking? | LinkedIn
- Why we’re all writing like shit on purpose now | LinkedIn
- Creating a hero’s journey for your AI-strategy | LinkedIn
- How corporations fell in love with AI, woke up with ashes, and called it ‘innovation’ | LinkedIn
- The corporate body snatchers | LinkedIn
- Screw your AI witch hunt, you illiterate gits | LinkedIn
- AI perverts make millions undressing your daughters for fun and profit | LinkedIn
- The state of tech-jobs 2025 | LinkedIn
- Meet the sloppers who can’t function without asking AI everything | LinkedIn
Leave a comment