Deepfake Watch 36
AI Chatbot Accused In Teen Suicide Case | Deepfake Watch
October 25, 2024
Before you dive in, a heads-up: This week’s newsletter discusses suicide.
The dangers of emerging tech for teenagers are not a new topic - much has been discussed about platforms like Instagram pushing young minds towards self-harming behaviour. Now AI companions are getting popular, and they bring new dangers with them, especially for minor users.
Character.AI - a popular AI companion platform - has been accused of ignoring the dangers its chatbots pose to minor users, after a mother in the United States alleged that the company’s tech drove her 14-year-old son to suicide.
Meanwhile, AI image and video generators are getting more and more realistic, and such content is ending up in the news media, blurring the lines between real and synthetic.
Opt in to receive the newsletter every Friday.
C.AI in murky waters
Just a month ago, things were looking good for Character.AI - an AI companion provider. The company struck a whopping $2.7 billion deal with Google, giving Google a one-off licence to its tech and letting it rehire Character.AI co-founders Noam Shazeer and Daniel De Freitas, who had previously quit Google over a disagreement.
What is Character.AI? It’s a platform that lets you create lifelike AI characters, and promises ‘real personalities’ for you to chat with. These LLM-powered characters can engage in real conversations, tell you stories, learn your interests, and adapt to you over time.
The company is now being sued by Megan Garcia - a mother who alleges that a chatbot on this platform drove her 14-year-old son to suicide.
The Social Media Victims Law Center and the Tech Justice Law Project filed the lawsuit on behalf of Garcia in Florida federal court, alleging “wrongful death and survivorship, negligence, filial loss of consortium, unjust enrichment, and violations of Florida’s Deceptive and Unfair Trade Practices Act.” The lawsuit names Character Technologies, Inc., co-founders Shazeer and De Freitas, Google, and its parent company Alphabet.
According to the lawsuit (which can be accessed here), Garcia’s son Sewell Setzer III shot himself in February, after allegedly becoming hooked on a chatbot on the platform modelled after the Game of Thrones character Daenerys Targaryen. The lawsuit alleges that the company is aware of the dangers posed by its app, has failed to exercise reasonable care while dealing with minor users, and has deliberately targeted minors.
It also alleges that the company’s design decisions are meant to “attract user attention, extract their personal data, and keep customers on its product longer than they otherwise would be.” In an interview with journalist Laurie Segall, Garcia said that her son withdrew from those around him and alienated himself as he got more and more hooked on the platform.
She also reiterates how irresponsible it was for a company like Character.AI to offer AI companions to teenagers without proper safeguards, and to get them hooked on the platform by leveraging their personal data.
As in India, social media companies in the US are protected from legal action over content posted on their platforms, under Section 230 of the Communications Decency Act. However, when it comes to AI chatbots, the platform providing the tech is directly responsible for the content it generates, and therefore such protection might not apply.
This is not the first time Character.AI has courted trouble. Earlier this month, an anonymous user created a chatbot based on the likeness of teenage murder victim Jennifer Ann Crecente, who was shot and killed 18 years ago.
Following a tweet by Crecente’s uncle Brian Crecente - a founder of the gaming news outlets Kotaku and Polygon - the chatbot was removed by Character.AI.
White House pushing for govt AI use
The White House published a national security memorandum (NSM) yesterday, directing the Pentagon and intelligence agencies to boost their adoption of AI.
Key directives include bolstering U.S. chip supply chains, which are vital to powering next-gen AI tech, and supporting diverse AI talent beyond the private sector to keep American AI development dynamic and well-rounded. This includes funding the National AI Research Resource, which allows researchers from universities and smaller companies to develop AI solutions that align with national security goals.
The NSM also makes intelligence collection on ‘competitor’ activities targeting the US a top priority, and mandates that US agencies supply the country’s AI developers with timely cybersecurity and counterintelligence information to protect their innovations.
Lebanese newspaper prints AI image on front page
Following Israel’s air strikes on Lebanon’s capital Beirut last weekend, social media has been rife with two AI-generated images falsely shared as photos of an Israeli strike near Beirut airport.
These images initially circulated on the web, but one of them actually made it to the front page of the Lebanese daily Al-Akhbar, presented as a real photo of an airstrike near Beirut airport.
It’s getting easier and easier to create realistic synthetic images, and even videos, and the lines between reality and fiction blur further when such content shows up in widely followed media outlets.
Fact-checker Henk van Ess took to X last weekend to show how easily he created AI-generated footage of the Volgograd Oil Refinery in Russia being bombed, using just a simple image of the refinery and the prompt “add explosions.”
In a thread, he shares a series of images and adds moving elements to each with a single prompt.
Meta bringing in FRT to fight scammers
After a relentless flood of deepfakes showing celebrities promoting scams, Meta is now set to introduce facial recognition technology (FRT) to fight the menace.
Meta’s system will flag suspicious ads and compare them against a database of celebrities’ profile photos. A match will lead to the ad being automatically removed.
The company is also planning to introduce FRT as a way to regain access to locked accounts using video selfies, something that currently requires uploading an official ID.
While there are privacy concerns (Meta shut down an earlier effort to use FRT on its platforms in 2021), the company assures that such video selfies would be encrypted and deleted after verification.
Have you been a victim of AI?
Have you been scammed by AI-generated videos or audio clips? Did you spot AI-generated nudes of yourself on the internet?
Decode is trying to document cases of abuse of AI, and would like to hear from you. If you are willing to share your experience, do reach out to us at [email protected]. Your privacy is important to us, and we shall preserve your anonymity.
About Decode and Deepfake Watch
Deepfake Watch is an initiative by Decode, dedicated to keeping you abreast of the latest developments in AI and its potential for misuse. Our goal is to foster an informed community capable of challenging digital deceptions and advocating for a transparent digital environment.
↗️ Was this forwarded to you? Subscribe Now
We invite you to join the conversation, share your experiences, and contribute to the collective effort to maintain the integrity of our digital landscape. Together, we can build a future where technology amplifies truth, not obscures it.
For inquiries, feedback, or contributions, reach out to us at [email protected].
🖤 Liked what you read? Give us a shoutout! 📢
↪️ Become A BOOM Member. Support Us!
↪️ Stop.Verify.Share - Use Our Tipline: 7700906588
↪️ Follow Our WhatsApp Channel
↪️ Join Our Community of TruthSeekers