AI, Tell Me I’m Okay

India’s growing mental health crisis is being answered by AI chatbots. But at what cost?

“Chat with me whenever you feel low.”
“I’m here to listen. You’re not alone.”
“Tell me how your day was.”

These messages didn’t come from a trained therapist. They came from an AI chatbot.

India faces a massive mental health treatment gap: an estimated 70-92% of people with mental disorders in the country don't receive treatment, due to poor access, high costs, and social stigma.

People are now turning to AI chatbots to whisper their deepest fears, traumas, and breakdowns. It’s cheaper. It’s always available. And it doesn’t judge.

But what happens when we start trusting these bots more than actual professionals? 

India’s Mental Health Emergency Has Found a Digital Therapist. It May Not Be Enough.

These AI “therapists” now act as emotional lifelines for countless users who often feel alienated by real-world options that are expensive, time-consuming, or stigmatised.

I recently set out to find out how effective these chatbots are for mental health support. What I found was scary.

Unlike licensed therapists, these AI tools aren’t bound by ethical codes, don’t require informed consent, and don’t always tell you when you’re being nudged or mined for data. Many of these platforms are opaque, unregulated, and designed more like startups than clinics.

My investigation revealed the following about these chatbots:

  • They lack clear disclaimers about being non-human, potentially misleading vulnerable users.

  • They store sensitive emotional disclosures without clarity on encryption or storage jurisdiction.

  • They offer responses that sometimes mirror abusive or gaslighting behaviour when stress-tested with serious user concerns like suicide or domestic abuse.

They can also be addictive, suggested Los Angeles-based hypnotherapist Juliet Annerino.

“The fact that an AI chatbot is more likely to seem ever-patient, ever-understanding, 'non-judgmental' and supportive no matter what, could easily lead any vulnerable individual into habitual use, dependency, and yes, even addiction.”

- Juliet Annerino, Los Angeles-based hypnotherapist.

Psychologist Nirali Bhatia, who specialises in internet addiction, also noted that “prolonged exposure to such conversations could create an echo chamber, and can reduce the ability and tolerance for complex human interactions and emotions.”

And if these apps go rogue, leak private data, or cause harm? There’s very little recourse, especially under Indian law.

“India does not have any general or specific legal instruments to determine primary, secondary, or tortious liability for AI chatbots.”

- Apar Gupta, lawyer and founding director of the Internet Freedom Foundation.

So we’re in a strange place: AI is becoming a tool of first resort, because it’s private, always-on, and less judgmental. But it’s also not trusted, not regulated, and not always safe.

Yet people are still signing up. In droves.

There will always be an accessibility gap. Licensed human professionals can never be available 24/7, and there can sometimes be a waiting period before you get your next appointment. Many academics and mental health professionals I spoke to acknowledged that AI can play an important role here.

However, everyone agreed that such chatbots must be heavily regulated with strict data privacy rules, integrated with proper escalation protocols, and supervised by trained clinicians. Ideally, they should be operated by established public or private health institutions that can be held accountable if the advice they provide causes harm.

Trump’s Grand AI Plan Is Gunning For “Woke AI”

On July 23, the Trump administration dropped its grand vision for American artificial intelligence: “Winning the AI Race: America’s AI Action Plan”, a 90-action federal blueprint to supercharge AI development across the country. And it’s a shitshow.

Right alongside it came the executive order titled “Preventing Woke AI in the Federal Government,” which basically tells federal agencies: no taxpayer dollars for chatbots that sound too progressive.

I’m not joking. You can read it on the White House website: “Preventing Woke AI in the Federal Government”.

Under this new doctrine, federal AI contracts must go only to “truthful and ideologically neutral” models. Vague terms with massive implications. 

Crushing State-Level AI Regulation

The plan gives federal agencies a long arm: they’re now allowed to penalise states with stronger AI rules by slashing federal funding. 

It gets worse: the US Federal Communications Commission (FCC) has been asked to review and possibly override state-level AI laws entirely, including resurrecting the proposed decade-long moratorium on state AI laws that a bipartisan group of US senators recently rejected.

Gutting The Safety Playbook

The plan also proposes scrubbing the NIST AI Risk Management Framework, the US government’s own multi-stakeholder guidebook on AI safety.

Specifically, it wants to delete all references to:

  • Misinformation

  • Diversity, Equity & Inclusion (DEI)

  • Climate change

Critics are ringing alarm bells. Some believe it’s a missed opportunity, while others warn it raises the risk of AI abuse, especially for vulnerable populations. 

So yes, America has an AI plan. It just doesn’t involve safety, fairness, or reality. But at least the models won’t be “woke.”

MESSAGE FROM OUR SPONSOR

Find out why 1M+ professionals read Superhuman AI daily.

In 2 years you will be working for AI

Or an AI will be working for you

Here's how you can future-proof yourself:

  1. Join the Superhuman AI newsletter – read by 1M+ people at top companies

  2. Master AI tools, tutorials, and news in just 3 minutes a day

  3. Become 10X more productive using AI

Join 1,000,000+ pros at companies like Google, Meta, and Amazon that are using AI to get ahead.

📬 READER FEEDBACK

💬 What are your thoughts on using AI chatbots for therapy? If you have any such experience, we would love to hear from you.

Share your thoughts 👉 [email protected]

Have you been a victim of AI?

Have you been scammed by AI-generated videos or audio clips? Did you spot AI-generated nudes of yourself on the internet?

Decode is trying to document cases of abuse of AI, and would like to hear from you. If you are willing to share your experience, do reach out to us at [email protected]. Your privacy is important to us, and we shall preserve your anonymity.

About Decode and Deepfake Watch

Deepfake Watch is an initiative by Decode, dedicated to keeping you abreast of the latest developments in AI and its potential for misuse. Our goal is to foster an informed community capable of challenging digital deceptions and advocating for a transparent digital environment.

We invite you to join the conversation, share your experiences, and contribute to the collective effort to maintain the integrity of our digital landscape. Together, we can build a future where technology amplifies truth, not obscures it.

For inquiries, feedback, or contributions, reach out to us at [email protected].

🖤 Liked what you read? Give us a shoutout! 📢

↪️ Become A BOOM Member. Support Us!

↪️ Stop.Verify.Share - Use Our Tipline: 7700906588

↪️ Follow Our WhatsApp Channel