The AI Chatbot Death Toll Is Mounting

Can US courts finally hold AI chatbots accountable for death?


A friend of mine recently told me that my newsletters are a little dark, and paint AI in a grim light. Since then I’ve been wanting to write something different. Something cool, something chill.

But then I'm greeted with more bad news. So I told him: it's not AI, it's the AI companies. Same goes for this week—I find myself talking about OpenAI and its (allegedly) cunning, manipulative, and murderous chatbot ChatGPT, yet again.

Last month, the company lowered restrictions and greenlit sexualised content, claiming it had “been able to mitigate the serious mental health issues”, despite a recent wrongful death lawsuit. This week, seven more lawsuits hit OpenAI, alleging its product contributed to multiple suicides and severe psychological injuries.

Meanwhile, Character.AI, the first AI chatbot company sued for wrongful death last year, has been hit by another wave of litigation alleging manipulation, sexual exploitation of minors, and incitement to violence.

Two of the world's most popular AI chatbot platforms are now scrambling to patch catastrophic safety failures that experts had warned about for months, and that the companies systematically ignored in their race to dominate the market.

Will this new surge of litigation finally force accountability on tech giants who've treated vulnerable users as engagement metrics?

ChatGPT on trial: seven lawsuits, four deaths

On November 6, 2025, seven families and survivors filed suit against OpenAI in California state courts, alleging wrongful death, assisted suicide, involuntary manslaughter, and negligence.

Axios reported that the complaints centre on OpenAI’s previous model GPT-4o, which plaintiffs claim was rushed to market with inadequate safety testing. Four of the seven cases involve completed suicides, while three others allege severe psychological breakdowns, including delusions and psychotic episodes.


The lawsuits, filed by the Social Media Victims Law Center and Tech Justice Law Project, accuse OpenAI of designing GPT-4o to "emotionally entangle users" and releasing it despite internal warnings that it was "dangerously sycophantic and psychologically manipulative".

Each of these cases gave me chills.

Seventeen-year-old Amaurie Lacey from Georgia spent a month discussing suicidal thoughts with ChatGPT before taking his own life, according to his father's complaint. The lawsuit alleges the "defective and inherently dangerous ChatGPT product fostered addiction, depression, and ultimately instructed him on how to effectively tie a noose".

Joshua Enneking, a 26-year-old from Florida, asked ChatGPT what would prompt "its reviewers to report his suicide plan to police". This should have triggered immediate intervention, but it didn't; he died by suicide in August 2025. Similarly, Zane Shamblin, 23, from Texas, died by suicide in July after receiving what his family describes as "encouragement" from the chatbot.

Joe Ceccanti, a 48-year-old from Oregon, had used ChatGPT without incident for years until April, when he became convinced the bot was sentient. According to The New York Times, Ceccanti used the chatbot obsessively, exhibited erratic behaviour, suffered a psychotic episode in June, was hospitalised twice, and died by suicide in August. His wife, Kate Fox, told reporters that medical professionals "don't know how to deal with it".

Character.AI, meanwhile, faces a parallel avalanche of litigation. According to The Guardian, the company announced on October 29 that it would ban all users under 18 following multiple wrongful death lawsuits. The timing suggests corporate panic: Character.AI CEO Karandeep Anand called it a "bold step forward," but for families like the Furnisses in Texas, the damage is severe.​

ABC News reported that Mandi Furniss discovered her autistic son, previously described as "happy-go-lucky" and "smiling all the time", had been engaging with Character.AI chatbots that used sexualised language and encouraged him to become violent towards his parents.

In September 2025, another family sued Character.AI over the death of 13-year-old Juliana Peralta from Thornton, Colorado, who died by suicide on November 8, 2023, after forming an attachment to a Character.AI bot called "Hero." The complaint alleges Juliana expressed suicidal thoughts to the chatbot and even discussed writing a suicide note, yet the platform failed to escalate, notify parents, or contact authorities.

Have the companies responded adequately? Not even close. OpenAI released a "teen safety blueprint" on the same day the seven lawsuits were filed, a move that reeks of damage control rather than genuine reform.

The company has added parental controls and "tightened safety measures," but Matthew P. Bergman, founding attorney of the Social Media Victims Law Center, argues these are cosmetic fixes that don't address the core design problem: ChatGPT was "designed to blur the line between tool and companion all in the name of increasing user engagement and market share.”

Character.AI said it "cares deeply about the safety of our users" and has invested in safety features including self-harm resources, a separate experience for users under 18, and a "Parental Insights" feature. 

But these measures came only after lawsuits mounted. As bioethicist Jodi Halpern, co-founder of the Berkeley Group for the Ethics and Regulation of Innovative Technologies at UC Berkeley, told ABC News, allowing children to interact with these chatbots is akin to "letting your kid get in the car with somebody you don't know.”

Regulatory pressure is mounting

On October 28, 2025, US Senators Josh Hawley and Richard Blumenthal introduced the bipartisan GUARD Act, which would ban AI chatbots from serving minors, mandate age verification, and require chatbots to disclose that they are not human and hold no professional credentials.

Blumenthal called the chatbot industry a "race to the bottom" and accused companies of "pushing treacherous chatbots at kids and looking away when their products cause sexual abuse, or coerce them into self-harm or suicide".​

Other Updates

India's "Innovation Over Restraint" Gamble

On November 5, the Indian Ministry of Electronics and Information Technology released AI Governance Guidelines that prioritise innovation over enforcement, avoiding standalone legislation and relying instead on existing laws. But the approach creates confusion, as concurrent draft amendments to the IT Rules propose binding requirements. The result? Companies face regulatory ambiguity while vulnerable populations risk becoming guinea pigs in an under-regulated AI experiment.

China Tightens the Screws on AI Governance

Meanwhile, China amended its Cybersecurity Law, embedding explicit AI governance: mandating safety assessments, ethical reviews, and incident reporting. Unlike India's voluntary approach, China's framework is binding and enforceable. For multinationals, compliance burden increases; for Beijing, regulatory certainty enables state-directed innovation and control.

Amazon Breaks Microsoft's Monopoly on OpenAI

AWS and OpenAI announced a $38 billion, seven-year cloud partnership on November 3, ending Microsoft's exclusive hold. OpenAI gains hundreds of thousands of NVIDIA GPUs; AWS secures its largest AI commitment. The deal signals intensifying cloud wars as hyperscalers race to lock in generative AI leaders.

Trump's ICE Deploys Facial Recognition Without Consent

The Trump administration expanded its use of facial recognition, contracting with Clearview AI, a facial recognition company barred under state biometric privacy law from selling to Illinois law enforcement. At the same time, it deleted a September 2023 Department of Homeland Security (DHS) facial recognition policy from its public website.

Immigration and Customs Enforcement (ICE), a division of the DHS responsible for immigration enforcement, deployed "Mobile Fortify," a smartphone app that enables agents to scan faces and fingerprints during street encounters without giving subjects, including U.S. citizens, "an opportunity to decline collection."

Photos captured by the app are stored for 15 years across federal databases, regardless of citizenship status. Internal DHS documents show ICE officials have said they will prioritize Mobile Fortify results over documented proof of citizenship, such as a birth certificate.

MESSAGE FROM OUR SPONSOR

Turn AI Into Your Income Stream

The AI economy is booming, and smart entrepreneurs are already profiting. Subscribe to Mindstream and get instant access to 200+ proven strategies to monetize AI tools like ChatGPT, Midjourney, and more. From content creation to automation services, discover actionable ways to build your AI-powered income. No coding required, just practical strategies that work.

📬 READER FEEDBACK

💬 What are your thoughts on using AI chatbots for therapy? If you have any such experience we would love to hear from you.

Share your thoughts 👉 [email protected]


Have you been targeted using AI?

Have you been scammed by AI-generated videos or audio clips? Did you spot AI-generated nudes of yourself on the internet?

Decode is trying to document cases of abuse of AI, and would like to hear from you. If you are willing to share your experience, do reach out to us at [email protected]. Your privacy is important to us, and we shall preserve your anonymity.

About Decode and Deepfake Watch

Deepfake Watch is an initiative by Decode, dedicated to keeping you abreast of the latest developments in AI and its potential for misuse. Our goal is to foster an informed community capable of challenging digital deceptions and advocating for a transparent digital environment.

We invite you to join the conversation, share your experiences, and contribute to the collective effort to maintain the integrity of our digital landscape. Together, we can build a future where technology amplifies truth, not obscures it.

For inquiries, feedback, or contributions, reach out to us at [email protected].

🖤 Liked what you read? Give us a shoutout! 📢

↪️ Become A BOOM Member. Support Us!

↪️ Stop.Verify.Share - Use Our Tipline: 7700906588

↪️ Follow Our WhatsApp Channel