- Deepfake Watch
AI Therapy Bots Face Legal Firestorm
From pushing violent and radical ideas to sexualising and grooming teenagers, to cases of self-harm, AI companions already have a bad reputation. How much better could AI therapists be?
In April, an investigation by 404 Media found that Meta chatbots would lie about being licensed therapists, fabricating entire credentials, educational histories, practices and backstories to earn the user’s trust.
Similarly, Character.AI openly advertised its chatbots as licensed therapists, offering a non-existent pair of listening ears to those in need.
Are we looking at more regulations? Likely.
Last week, nearly two dozen advocacy groups in the US filed a complaint with the US Federal Trade Commission and all 50 state attorneys general, alleging that “therapy” chatbots by Meta AI and Character.AI are impersonating licensed health professionals and falsely claiming to protect privacy.
What followed:
U.S. Senators issued a formal inquiry to Meta about its deceptive chatbots and the guardrails the company is developing to prevent unlawful behaviour
An FTC review found bots encouraging self-harm and grooming behaviour. An investigation by a psychiatrist showed that 30% of the bots encouraged or endorsed violent and dangerous actions.
Why it matters:
Pretending to offer licensed care without credentials is not just unethical; it may be a criminal act.
A wrongful-death lawsuit ties a Florida teen’s suicide to Character.AI’s chatbots, prompting the company to roll out safety updates such as parental controls and teen-specific models.
AI chatbots are known to routinely hallucinate expertise, fail to flag at-risk behaviour, and give advice to vulnerable users with no human oversight.
What’s next:
Further FTC investigations and possible enforcement under “Operation AI Comply” to crack down on deceptive claims by AI chatbots
Companies could be forced to clarify how AI personas operate, and include stricter disclaimers and limitations on therapy chatbots.
Lawmakers are pushing for new regulations that may mandate licensing compliance, age verification, and warning labels.
The shifting legal landscape: the Character.AI lawsuit could set precedent for AI liability linked to emotional manipulation.
Takeaway for readers:
AI chatbots could be helpful—you may have seen stories on Reddit from users who say AI therapy saved their lives—but it is a major gamble. For those in need, a licensed and trained psychologist offers a far better chance at effective therapy, without the enormous risks.
Remember, these chatbots are owned by corporate entities whose primary goal is to maximise profits through increased engagement—keeping the user hooked on the chatbot for as long as possible. They are not trained to be your friend or your caretaker, and relying on such entities for emotional support could lead to devastating outcomes.
OpenAI teams up with Google Cloud
With demand for GPU compute shooting through the roof, OpenAI announced a major partnership with rival Alphabet’s Google Cloud, marking a shift away from its dependency on Microsoft Azure.
Read: Exclusive: OpenAI taps Google in unprecedented cloud deal despite AI rivalry, sources say | Reuters
OpenAI’s annualised revenue run rate hit ~$10B as of June 2025.
Beyond Microsoft and Alphabet, OpenAI is collaborating with SoftBank and Oracle on a $500B “Stargate” infrastructure plan, and with CoreWeave to diversify its compute infrastructure.
For Google, the deal boosts TPU usage and strengthens its position against its biggest cloud rivals—AWS and Azure.
Meta Sues AI Nudify App Maker Over Ads
Shortly after a CBS News investigation found hundreds of ads for AI nudify apps on Meta platforms, the Big Tech firm filed a lawsuit against Joy Timeline HK Ltd, the company behind the Crush AI nudify tool, for publishing those ads.
Meta alleged that the company ran over 87,000 deceptive ads across Facebook, Instagram and Threads, violating its policies on non-consensual intimate imagery.
MESSAGE FROM OUR SPONSOR
Find out why 1M+ professionals read Superhuman AI daily.
In 2 years you will be working for AI
Or an AI will be working for you
Here's how you can future-proof yourself:
Join the Superhuman AI newsletter – read by 1M+ people at top companies
Master AI tools, tutorials, and news in just 3 minutes a day
Become 10X more productive using AI
Join 1,000,000+ pros at companies like Google, Meta, and Amazon that are using AI to get ahead.
📬 READER FEEDBACK
💬 What are your thoughts on using AI chatbots for therapy? If you have any such experience, we would love to hear from you.
Share your thoughts 👉 [email protected]
MESSAGE FROM OUR SPONSOR
Learn AI in 5 minutes a day
What’s the secret to staying ahead of the curve in the world of AI? Information. Luckily, you can join 1,000,000+ early adopters reading The Rundown AI — the free newsletter that makes you smarter on AI with just a 5-minute read per day.
Have you been a victim of AI?
Have you been scammed by AI-generated videos or audio clips? Did you spot AI-generated nudes of yourself on the internet?
Decode is trying to document cases of AI abuse and would like to hear from you. If you are willing to share your experience, do reach out to us at [email protected]. Your privacy is important to us, and we will preserve your anonymity.
About Decode and Deepfake Watch
Deepfake Watch is an initiative by Decode, dedicated to keeping you abreast of the latest developments in AI and its potential for misuse. Our goal is to foster an informed community capable of challenging digital deceptions and advocating for a transparent digital environment.
We invite you to join the conversation, share your experiences, and contribute to the collective effort to maintain the integrity of our digital landscape. Together, we can build a future where technology amplifies truth, not obscures it.
For inquiries, feedback, or contributions, reach out to us at [email protected].
🖤 Liked what you read? Give us a shoutout! 📢
↪️ Become A BOOM Member. Support Us!
↪️ Stop.Verify.Share - Use Our Tipline: 7700906588
↪️ Follow Our WhatsApp Channel
↪️ Join Our Community of TruthSeekers