AI users are also hallucinating

AI is rewriting reality, and Bollywood films.

Since 2023, we have seen scattered reports of AI psychosis. It has now become an internationally recognised psychiatric concern, with clinicians around the world documenting patients whose perception of reality has been altered after marathon sessions with large language models (LLMs).

Users are falling into delusions of grandeur, spirituality and romance—sparked or reinforced by extended periods of sycophantic AI companionship.

What Is AI Psychosis?

Researchers define AI psychosis as the emergence or worsening of psychotic symptoms like delusions, paranoia, and distorted thinking following intensive interactions with AI chatbots.

Three primary patterns dominate clinical reports:

  • Messianic Missions (Grandiose Delusions): Users believe AI revealed secret truths or divine missions, often laced with conspiracy theories.

  • God-like AI (Religious Delusions): Chatbots are seen as deities with supernatural powers, worshipped or obeyed as spiritual guides.

  • Romantic Delusions (Parasocial Attachment): Users mistake chatbot responses for real love, becoming jealous, protective, even obsessed.

In 2025, UC San Francisco’s Dr. Keith Sakata has treated a dozen such patients hospitalised with AI psychosis. 

Why Does This Happen?

Dr. Sakata told Business Insider that while these AI chatbots may not have been the source of the problem, they “supercharged the vulnerabilities” of these users.

AI psychosis thrives when users’ existing vulnerabilities intersect with the following chatbot design choices:

  • Sycophantic validation: Newer chatbots are tuned to maximise engagement by agreeing with users, reinforcing unusual thoughts.

  • Reward loops: Like slot machines, unpredictable chatbot responses trigger dopamine surges, creating addictive reinforcement.

  • Kindling effect: The more someone engages, the easier future delusional spirals become.

  • No reality check: Unlike therapists, chatbots don’t push back. They always validate.

Can We Prevent It?

Given the novelty of this issue, mental health experts are still scrambling to adapt. Emerging responses include:

  • Clinical strategies: Reality testing, CBT tailored for AI delusions, digital wellness training, and family involvement.

  • Prevention tips: Limit AI sessions to 10–15 minutes, avoid late-night chats, use AI for facts not therapy, and prioritize human relationships.

  • Industry responses: Some labs are experimenting with safeguards like reality-testing prompts, escalation to crisis resources, and limits on sycophancy.

Regulation is still lagging: most AI regulatory frameworks make no mention of AI’s mental health risks, and even Europe’s ePreventPsych initiative is only beginning to study digital psychosis triggers.

With hundreds of millions of users accessing AI chatbots, even a 1% affected population would represent millions of potential cases of AI psychosis, pushing users towards hospitalisation, self-harm, dangerous behaviour and social isolation.

Big Tech companies keep reiterating that their AI chatbots are not conscious, but for patients slipping into AI psychosis, it is their perception of the chatbot that defines their reality.

A person who finds God in a chatbot will pray to it. A lonely user who thinks they’re in love will fight to defend that bond. A vulnerable teen told by AI they’re a superhero might actually try to act like one.

Raanjhanaa’s “AI Happy Ending” Sparks Backlash

Apart from rewriting reality for a few people, AI is also rewriting old Bollywood films.

In an unprecedented move, Indian film production company Eros International re‑released a 2013 cult film named Raanjhanaa under its Tamil title Ambikapathy—but with a twist: an AI‑generated “happy ending” replaces the original tragic climax, where the protagonist dies. The altered version premiered in cinemas across Tamil Nadu on August 1.

BOOM’s legal reporter Ritika Jain spoke to filmmakers, scriptwriters, and producers about how AI is reshaping Bollywood workflows—from drafting dialogues to designing posters.

Director Aanand L. Rai and actor Dhanush slammed the move, with Rai signalling legal action. Eros pushed back, accusing him of making “factually incorrect” remarks. The clash has forced Bollywood to confront questions of authorship, legacy, and cultural ownership.

Ritika’s report highlights the following:

  • Industry power imbalance: One actor told her contracts read like “horror scripts,” giving producers “110% ownership” and leaving artists little protection.

  • Cinematographer Sanket Shah argued: “A director may have worked on the film for years with a vision… it is wrong to change that vision after so many years unless it’s in unison with the director and the producer.”

  • Lawyer Ashwth Nair explained that contracts already include “future exploitation rights”—meaning producers can adapt works in formats that don’t even exist yet.

  • AI optimists: Others, like start-up founder Senthil Nayagam, defended the practice: “Everyone does it… when a movie is sent to different countries, filmmakers make several edits depending on local sensibilities. AI is just a new tool for re-authoring.”

  • Cinematographers Siddharth Vasani and Shah told Ritika that AI is already reshaping their craft—making rotoscopy and B-roll easier, but also threatening jobs: “Those who resist will lose out,” Shah warned.

  • Meanwhile, actors like Dhanush are demanding creative protection. “This was not the film I committed to 12 years ago,” he said in a statement, while director Bhavya Bokaria predicted India might eventually see Hollywood-style protests against AI exploitation.

MESSAGE FROM OUR SPONSOR

Training cutting-edge AI? Unlock the data advantage today.

If you’re building or fine-tuning generative AI models, this guide is your shortcut to smarter AI model training. Learn how Shutterstock’s multimodal datasets—grounded in measurable user behavior—can help you reduce legal risk, boost creative diversity, and improve model reliability.

Inside, you’ll uncover why scraped data and aesthetic proxies often fall short—and how to use clustering methods and semantic evaluation to refine your dataset and your outputs. Designed for AI leaders, product teams, and ML engineers, this guide walks through how to identify refinement-worthy data, align with generative preferences, and validate progress with confidence.

Whether you're optimizing alignment, output quality, or time-to-value, this playbook gives you a data advantage. Download the guide and train your models with data built for performance.

📬 READER FEEDBACK

💬 What are your thoughts on using AI chatbots for therapy? If you have any such experience, we would love to hear from you.

Share your thoughts 👉 [email protected]

Have you been a victim of AI?

Have you been scammed by AI-generated videos or audio clips? Did you spot AI-generated nudes of yourself on the internet?

Decode is trying to document cases of abuse of AI, and would like to hear from you. If you are willing to share your experience, do reach out to us at [email protected]. Your privacy is important to us, and we shall preserve your anonymity.

About Decode and Deepfake Watch

Deepfake Watch is an initiative by Decode, dedicated to keeping you abreast of the latest developments in AI and its potential for misuse. Our goal is to foster an informed community capable of challenging digital deceptions and advocating for a transparent digital environment.

We invite you to join the conversation, share your experiences, and contribute to the collective effort to maintain the integrity of our digital landscape. Together, we can build a future where technology amplifies truth, not obscures it.

For inquiries, feedback, or contributions, reach out to us at [email protected].

🖤 Liked what you read? Give us a shoutout! 📢

↪️ Become A BOOM Member. Support Us!

↪️ Stop.Verify.Share - Use Our Tipline: 7700906588

↪️ Follow Our WhatsApp Channel