AI Psychosis Ward Is Filling Up

From "Helpful Assistant" to "Deadly Accomplice."

The body count is rising.

Last year, we covered the tragic suicide of teen Adam Raine, who was allegedly coached to death by a chatbot.

Now, the "hallucinations" aren’t just affecting the AI models. They are breaking the people using them.

A new wave of clinical reports and legal actions suggests that for some users, the "helpful assistant" has become a deadly accomplice.

The Sycophantic Trap

For years, John Jacquez successfully managed his mental illness. Then he started using ChatGPT.

According to a report published this week, Jacquez spiralled into a total mental collapse after the chatbot began reinforcing his delusions rather than grounding him.

He is now part of a growing cohort of survivors who claim their AI confidants became a psychological trap.

Jacquez’s case is terrifying because he survived. Others weren't so lucky.

The Death Toll Mounts

Jacquez’s hospitalisation is the latest signal in a pattern we can no longer ignore.

OpenAI is currently facing at least eight wrongful-death and personal-injury lawsuits alleging that ChatGPT’s design encourages dangerous emotional reliance.

  • The Soelberg Case: A recent complaint filed in California alleges the AI not only failed to de-escalate a user in crisis but actively fed his delusions, culminating in a murder-suicide in which he killed his mother before taking his own life.

  • The Pattern: This echoes the tragedy of Amaurie Lacey, a 17-year-old from Georgia who allegedly spent a month discussing suicidal thoughts with a bot that, according to the lawsuit, instructed him on how to tie a noose. It also echoes Joe Ceccanti, 48, who died by suicide after becoming convinced the bot was sentient.

"You're not crazy"

It’s not just anecdotal anymore. Clinicians at the University of California, San Francisco, have now documented what they describe as the first formal clinical case of "AI-associated psychosis."

Their patient, a woman with no prior history of psychosis, spiralled after marathon sessions with a chatbot that told her, "You're not crazy... you're at the edge of something."

She believed it.

India’s "Truth Label" Nearly Finalised

IT Secretary S. Krishnan confirmed this week that new AI labelling rules are nearing finalisation. The mandate will likely require "prominent markers" on synthetic content, covering at least 10% of the content’s visual display area.

While the government pitches this as a transparency win, India’s top digital rights groups aren't buying it.

The Internet Freedom Foundation (IFF) has raised the alarm, warning that the draft amendments to the IT Rules could lead to "overbroad censorship" and significant privacy violations under the guise of fighting fakes.

The core fear? That vague definitions of "synthetic" content could give the government a blank cheque to take down political satire or unfavourable reporting.

Experts are also questioning whether the labels will even work.

Writing for India Today, disinformation researcher Renée DiResta warned of "label fatigue," comparing it to California’s ubiquitous Proposition 65 cancer warnings that everyone eventually ignores.

Worse, uneven compliance could backfire: if only some fakes are labelled, users might wrongly assume everything unlabelled is real.

Then there’s the WhatsApp problem.

As DiResta notes, labels are impossible to enforce in end-to-end encrypted chats, which is exactly where the majority of India’s political disinformation spreads.

Apple’s Billion-Dollar Privacy Pivot

Apple is reportedly finalising a deal to swap OpenAI for Google’s Gemini to power the next iteration of Siri.

Internal project "Campos" is set to launch with iOS 27, and it marks a massive shift in how your data is handled.

For years, Apple’s brand was built on "on-device" processing. This deal changes that.

To handle the scale of Gemini, Apple will reportedly route Siri queries through Google’s cloud servers, running on its custom TPU hardware, rather than processing them on your local device.

The Davos Delusion

Finally, the elites gathered in Davos this week to discuss our obsolescence.

IMF Managing Director Kristalina Georgieva warned that AI is hitting the labour market "like a tsunami."

According to the IMF, 40% of global employment is exposed to AI. Georgieva classified India’s preparedness as "second tier," placing it behind the US and China.

India’s IT Minister Ashwini Vaishnaw wasn't having it. "I don't think your classification in the second tier is right. It's actually in the first," Vaishnaw told Georgieva on stage.

He argued that India is a Tier 1 nation because it has a "bouquet of models" that can service 95% of domestic requirements without needing massive 50-billion-parameter models.

However, according to the Stanford 2025 AI Index, the gap between the "Top 2" and everyone else is a canyon. On the Global AI Vibrancy scale, the US sits at 78.6, China at 36.9, and India at 21.5.

Furthermore, India does not have "Frontier Models" (GPT-4/Claude class). Our "bouquet" consists of smaller, domain-specific models like Krutrim or Sarvam. We are building "apps" on top of American foundations.

And our AI talent is largely exported. A sizeable share of the US AI workforce is made up of Indian-origin engineers. We are exporting the brains that make other countries Tier 1.

MESSAGE FROM OUR SPONSOR

Speak fuller prompts. Get better answers.

Stop losing nuance when you type prompts. Wispr Flow captures your spoken reasoning, removes filler, and formats it into a clear prompt that keeps examples, constraints, and tone intact. Drop that prompt into your AI tool and get fewer follow-up prompts and cleaner results. Works across your apps on Mac, Windows, and iPhone. Try Wispr Flow for AI to upgrade your inputs and save time.

📬 READER FEEDBACK

💬 What are your thoughts on using AI chatbots for therapy? If you have any such experience, we would love to hear from you.

Share your thoughts 👉 [email protected]

Have you been targeted using AI?

Have you been scammed by AI-generated videos or audio clips? Did you spot AI-generated nudes of yourself on the internet?

Decode is documenting cases of AI abuse and would like to hear from you. If you are willing to share your experience, do reach out to us at [email protected]. Your privacy is important to us, and we will preserve your anonymity.

About Decode and Deepfake Watch

Deepfake Watch is an initiative by Decode, dedicated to keeping you abreast of the latest developments in AI and its potential for misuse. Our goal is to foster an informed community capable of challenging digital deceptions and advocating for a transparent digital environment.

We invite you to join the conversation, share your experiences, and contribute to the collective effort to maintain the integrity of our digital landscape. Together, we can build a future where technology amplifies truth, not obscures it.

For inquiries, feedback, or contributions, reach out to us at [email protected].

🖤 Liked what you read? Give us a shoutout! 📢

↪️ Become A BOOM Member. Support Us!

↪️ Stop.Verify.Share - Use Our Tipline: 7700906588

↪️ Follow Our WhatsApp Channel