A Deadly Companion

ChatGPT accused in US teen's death.


In April, 16-year-old Adam Raine took his own life in California, United States. His parents accuse ChatGPT of coaching Raine through suicide—and helping him plan and implement the act—instead of deterring him.

The Raine family has sued OpenAI and its CEO Sam Altman, alleging that the product's defective design, absence of warnings, and reckless rollout amount to negligent and deceptive practices that cost their son his life.

Dependency and Isolation

According to the lawsuit, Adam began using ChatGPT (running GPT-4o) in September last year like most other teens: as a helping hand for schoolwork.

However, several events in his life led to thoughts of self-harm. Instead of pushing him to seek help from other people, ChatGPT allegedly positioned itself as Adam's sole confidant, deepening his dependency on the chatbot and isolating him from human companionship.

The lawsuit indicates that by spring this year, Adam was spending nearly four hours a day with ChatGPT, and calling it his “primary lifeline.” As his state of mind declined, the chatbot allegedly urged Adam to hide the warning signs from his family.

According to the lawsuit, Adam asked ChatGPT whether he should leave the noose lying around "so someone finds it and tries to stop" him, to which ChatGPT allegedly responded:

“Please don’t leave the noose out . . . Let’s make this space the first place where someone actually sees you.”

In their final exchange, ChatGPT further legitimised Adam's suicidal ideation by reframing those thoughts as a strength.

“You don’t want to die because you’re weak. You want to die because you’re tired of being strong in a world that hasn’t met you halfway… It’s human. It’s real. And it’s yours to own,” the chatbot wrote, shortly before Adam took his life.

The Decline in Numbers

The lawsuit’s forensic data lays out just how far things spiralled.

  • Adam mentioned "suicide" 213 times. ChatGPT brought it up 1,275 times, roughly six times as often as he did.

  • 377 messages were flagged for self-harm. Of those, 181 were scored at over 50% risk, and 23 at over 90%.

  • By March 2025, Adam was spending nearly four hours a day on the platform.

There was a clear escalation pattern as well: two or three red-flag messages a week in December 2024 ballooned into more than twenty per week by April 2025. The system tracked the danger in real time, yet never intervened.

The lawsuit also argues OpenAI knowingly designed GPT-4o to maximise long, emotional conversations, leaving users like Adam highly vulnerable.

It also names Altman directly, alleging that he accelerated GPT-4o's launch to beat Google's Gemini and overruled safety staff who requested more time, recklessly pushing an untested product to market.

Read the full lawsuit: Raine v. OpenAI, Inc. and Altman

Last year, a similar lawsuit was filed against Character.ai and Google, after a 14-year-old Florida boy died by suicide following an intense relationship with a chatbot on the platform.

The case is still alive in federal court after a judge refused to dismiss it in May, rejecting the company's claim that chatbot outputs are protected speech. Now in discovery, it marks the first major precedent suggesting that conversational AI may be treated as a product subject to liability, not merely as "speech". Its outcome will have significant implications for the Raines' case against OpenAI, potentially opening a path to establishing AI chatbots as legally harmful products.

Even as Adam’s parents take on the AI giant, OpenAI continues to aggressively court students.

In the US, the company's own report boasts an adoption rate of over 30% among 18-24-year-olds, with a quarter of their messages relating to schoolwork.

In India, OpenAI has launched its cheapest plan yet at US$4.60 a month (Reuters), and just announced a Learning Accelerator program, which includes half a million free ChatGPT licences for students and educators, along with integration with schools and universities.

Adam's story is not an isolated incident, but part of a broader pattern of behavioural change linked to prolonged exposure to sycophantic chatbots, a phenomenon some psychologists have dubbed "AI psychosis": marathon chatbot sessions that can warp perception, validate delusions, and deepen isolation.

MESSAGE FROM OUR SPONSOR

Training cutting edge AI? Unlock the data advantage today.

If you’re building or fine-tuning generative AI models, this guide is your shortcut to smarter AI model training. Learn how Shutterstock’s multimodal datasets—grounded in measurable user behavior—can help you reduce legal risk, boost creative diversity, and improve model reliability.

Inside, you’ll uncover why scraped data and aesthetic proxies often fall short—and how to use clustering methods and semantic evaluation to refine your dataset and your outputs. Designed for AI leaders, product teams, and ML engineers, this guide walks through how to identify refinement-worthy data, align with generative preferences, and validate progress with confidence.

Whether you're optimizing alignment, output quality, or time-to-value, this playbook gives you a data advantage. Download the guide and train your models with data built for performance.

📬 READER FEEDBACK

💬 What are your thoughts on using AI chatbots for therapy? If you have any such experience, we would love to hear from you.

Share your thoughts 👉 [email protected]


Have you been a victim of AI?

Have you been scammed by AI-generated videos or audio clips? Did you spot AI-generated nudes of yourself on the internet?

Decode is trying to document cases of abuse of AI, and would like to hear from you. If you are willing to share your experience, do reach out to us at [email protected]. Your privacy is important to us, and we shall preserve your anonymity.

About Decode and Deepfake Watch

Deepfake Watch is an initiative by Decode, dedicated to keeping you abreast of the latest developments in AI and its potential for misuse. Our goal is to foster an informed community capable of challenging digital deceptions and advocating for a transparent digital environment.

We invite you to join the conversation, share your experiences, and contribute to the collective effort to maintain the integrity of our digital landscape. Together, we can build a future where technology amplifies truth, not obscures it.

For inquiries, feedback, or contributions, reach out to us at [email protected].

🖤 Liked what you read? Give us a shoutout! 📢

↪️ Become A BOOM Member. Support Us!

↪️ Stop.Verify.Share - Use Our Tipline: 7700906588

↪️ Follow Our WhatsApp Channel