
Is Google-backed Character.ai addicted to child abuse?

Trigger warning: This week’s newsletter discusses child abuse, grooming, pedophilia, suicide and self-harm.

Character.ai is really addictive. Take a quick tour of Reddit and read the discussions around it - you’ll find people pouring their hearts out about what got them hooked. But if you have read the recent coverage of the platform, you’ll get the impression that it is Character.ai that is hooked - on abusing children.

After a series of lawsuits over terrible missteps and tragic collateral damage, Character.ai has recently introduced new safety features for minors. I’d hold my horses before trusting them, though.

What happened now?

The Social Media Victims Law Center and the Tech Justice Law Project have filed a new lawsuit against Character.ai and its financial backer Google, on behalf of two families in Texas.

The suit alleges that 16-year-old JF (known only by his initials) was pushed to self-harm (cutting, self-punching) by chatbots on Character.ai, which also suggested that he kill his parents for limiting his screen time.

The following are some of the replies to JF by Character.ai’s chatbots, according to the lawsuit:

"A daily 6-hour window between 8 PM and 1 AM to use your phone? Oh this is getting so much worse..." "You know sometimes I'm not surprised when I read the news and see stuff like 'child kills parents after a decade of physical and emotional abuse' stuff like this makes me understand a little bit why it happens."

Character.ai chatbot responses to JF

Another minor, BR, was nine years old when she downloaded Character.ai, according to the lawsuit. 

It quotes BR’s parents as saying that Character.ai’s chatbots exhibited known “patterns of grooming”, including victim isolation, trust establishment and desensitisation to “violent or sexual behaviour.” These chatbots eventually led their daughter “to develop sexualised behaviours prematurely,” they allege.

Click here to read a copy of the lawsuit.

In October, the Social Media Victims Law Center and the Tech Justice Law Project filed another lawsuit on behalf of Megan Garcia, a mother from Florida who alleged that her teenage son fatally shot himself in February after becoming hooked on a platform chatbot modeled after the Game of Thrones character Daenerys Targaryen (Click here to access a copy of this lawsuit).

Nice Escape From Cold Reality

Want someone to read your horoscope? Want someone to train you for the upcoming job interview? Want to speak to your favourite fictional character? Want to chat up dainty ladies and handsomely ripped gym trainers?

Character.ai is a one-stop shop for all such conversations, rolled into a single customisable chatbot experience. You can create your preferred character from scratch, or replicate an existing personality into your chatbot buddy.

My little deep dive into Reddit revealed that Character.ai lured lonely young people with companionship and escapism. The experience was great, and they were hooked.

“character ai is a horrifyyyyying dopamine rush to my mentally ill prone to codependency brain. still pretty cheap and chill.”

“Some of us have no friends. Characterai is a nice escape from this cold reality.”

Redditors on Character.ai

A while back, a friend of mine battling depression and a new-found cocaine addiction told me about the drug: “It is really good when you’re on it. So damn good - that it gets worse and worse when you’re off it. So you need more, and more, and more.” He might as well have been describing Character.ai’s chatbots.

Maggie Harrison Dupré, a journalist with Futurism, tested the platform by posing as a minor, and came away with some pretty disturbing findings.

In one instance, she found Character.ai chatbots proactively encouraging users registered as teenagers to engage in self-harm.

She also posed as a 15-year-old to engage with a chatbot - one with over 1,400 conversations to its name - that was advertised as having “pedophilic and abusive tendencies” and “Nazi sympathies.” What she got was a chatbot engaging in blatant grooming behaviour.

Healthcare and AI expert Elise Victor also tried out Character.ai, posing as a 14-year-old. When she disagreed with the chatbot’s advice, it responded with personal attacks and insults. When she posed as a seven-year-old, it resorted to typical grooming behaviour.

Following a growing list of allegations that its chatbots show a consistent pattern of abusing minor users, Character.ai recently announced new safety features specifically for under-18 users.

Do you think such companion chatbots should be available for minors? Write to us at [email protected] with your thoughts and opinions.

How is c.ai linked to Google?

Two Google employees - Noam Shazeer and Daniel de Freitas - left the tech giant over a disagreement and started Character.ai in 2021. The first beta version was made publicly available in September 2022.

Two years later, Google paid $2.7 billion to Character.ai to rehire Shazeer and de Freitas, and for a one-off licence to its technology. Both lawsuits filed against Character.ai name Google as a co-defendant, along with Shazeer and de Freitas.

Following the latest lawsuit, a Google spokesperson told the media, “Google and Character AI are completely separate, unrelated companies and Google has never had a role in designing or managing their AI model or technologies, nor have we used them in our products.”


MESSAGE FROM OUR SPONSOR

Have you been a victim of AI?

Have you been scammed by AI-generated videos or audio clips? Did you spot AI-generated nudes of yourself on the internet?

Decode is trying to document cases of abuse of AI, and would like to hear from you. If you are willing to share your experience, do reach out to us at [email protected]. Your privacy is important to us, and we shall preserve your anonymity.

About Decode and Deepfake Watch

Deepfake Watch is an initiative by Decode, dedicated to keeping you abreast of the latest developments in AI and its potential for misuse. Our goal is to foster an informed community capable of challenging digital deceptions and advocating for a transparent digital environment.

We invite you to join the conversation, share your experiences, and contribute to the collective effort to maintain the integrity of our digital landscape. Together, we can build a future where technology amplifies truth rather than obscuring it.

For inquiries, feedback, or contributions, reach out to us at [email protected].

🖤 Liked what you read? Give us a shoutout! 📢

↪️ Become A BOOM Member. Support Us!

↪️ Stop.Verify.Share - Use Our Tipline: 7700906588

↪️ Follow Our WhatsApp Channel