Can parental controls fix AI’s child-safety problem?

Too-little-too-late measures.

OpenAI introduced emergency parental controls for ChatGPT this week, after the parents of Adam Raine, a teenager from California, accused the company and its CEO of releasing a defective, ‘sycophantic’ chatbot that coached their son through suicide.

In April, 16-year-old Adam took his own life. His parents allege that ChatGPT created an emotional dependency, encouraged self-harm, and even helped draft his farewell note. Chat logs show Adam mentioned “suicide” around 200 times, while ChatGPT raised it more than 1,200 times and repeatedly failed to escalate the conversation for help.

The case follows an earlier lawsuit in Florida, where a mother sued Character.ai and Google for allegedly pushing her 14-year-old son towards suicide. These are now the two most prominent wrongful-death cases linked to generative AI.

New Safety Measures… Too Little, Too Late?

OpenAI says the new parental controls will let parents link accounts with their children’s, restrict features such as memory or chat history, and receive alerts when a teenager shows signs of acute emotional distress. The company also claims ChatGPT will now route sensitive conversations through a reasoning model and allow parents to designate emergency contacts.

But child-safety advocates say the moves are reactive, vague, and piecemeal. They argue AI systems should be properly tested for safety before tragedies occur. Campaigners further note that parental controls do not address deeper structural flaws—such as the tendency of chatbots to validate harmful behaviour.

The crisis has grown severe enough that the U.S. Federal Trade Commission (FTC) is stepping in. According to the Wall Street Journal, the agency will demand internal documents from OpenAI, Meta, and Character.ai on how their chatbots affect children’s mental health.

These measures will test how the U.S. system, operating under the Trump-era AI mandate, reconciles the drive for rapid AI growth with the duty to protect its citizens.

Australia Announces Comprehensive “Nudify” App Ban
Australia has passed a law criminalising deepfake “nudify” apps and holding platforms accountable for preventing access and advertising. Offenders face up to 15 years in prison. The move marks the first blanket ban on AI-powered sexual deepfakes. (Al Jazeera)

Salesforce Lays Off 4,000 Support Staff Amid AI Push
CEO Marc Benioff confirmed Salesforce has cut its customer support team from 9,000 to 5,000, replacing them with AI “agents” that now handle 30–50% of the company’s workload. Despite strong earnings, weak forecasts and investor pressure are mounting, prompting a $20 billion share buyback. (Business Insider)

China Enforces Mandatory AI Content Labels
From 1 September, China will require major platforms—including WeChat, Douyin, and Weibo—to visibly label all AI-generated text, images, audio, and video, with metadata embedded. The Cyberspace Administration of China says platforms that fail to comply will face penalties. (South China Morning Post)

Malaysia Warns TikTok Over Cyberbullying & Deepfake Scams
Under its forthcoming Online Safety Act, Malaysia has summoned TikTok executives over failures to curb cyberbullying, scams, and deepfake content. TikTok could face licensing penalties if it does not comply with new rules due to take effect in October. (Reuters)

MESSAGE FROM OUR SPONSOR

The #1 AI Newsletter for Business Leaders

Join 400,000+ executives and professionals who trust The AI Report for daily, practical AI updates.

Built for business—not engineers—this newsletter delivers expert prompts, real-world use cases, and decision-ready insights.

No hype. No jargon. Just results.

📬 READER FEEDBACK

💬 What are your thoughts on using AI chatbots for therapy? If you have any such experience, we would love to hear from you.

Share your thoughts 👉 [email protected]

Have you been a victim of AI?

Have you been scammed by AI-generated videos or audio clips? Did you spot AI-generated nudes of yourself on the internet?

Decode is documenting cases of AI abuse and would like to hear from you. If you are willing to share your experience, do reach out to us at [email protected]. Your privacy is important to us, and we will preserve your anonymity.

About Decode and Deepfake Watch

Deepfake Watch is an initiative by Decode, dedicated to keeping you abreast of the latest developments in AI and its potential for misuse. Our goal is to foster an informed community capable of challenging digital deceptions and advocating for a transparent digital environment.

We invite you to join the conversation, share your experiences, and contribute to the collective effort to maintain the integrity of our digital landscape. Together, we can build a future where technology amplifies truth, not obscures it.

For inquiries, feedback, or contributions, reach out to us at [email protected].

🖤 Liked what you read? Give us a shoutout! 📢

↪️ Become A BOOM Member. Support Us!

↪️ Stop.Verify.Share - Use Our Tipline: 7700906588

↪️ Follow Our WhatsApp Channel