Deepfake Watch 34

A Nightmare On AI Street | Deepfake Watch

October 11, 2024

Geoffrey Hinton is scaring the living daylights out of me. He once said superintelligent AI could pose an existential threat to humanity, and seek to assert more control in its drive to achieve its objectives.

Usually I’d brush it off as the words of a lunatic - a paranoid doomsayer and conspiracy theorist, wearing his tin foil hat while sitting in his basement. But he’s the bloody “Godfather of AI”, who just won the Nobel Prize in physics for foundational discoveries in artificial neural networks that laid the groundwork for modern AI. So yeah, I shudder at what he said.

And people have been doing all sorts of bizarre things on AI platforms, from trying to abuse their AI girlfriends to trying to quench their dark fantasies. Would AI superintelligence weaponise such knowledge against us?

Opt in to receive the newsletter every Friday.

Muah AI Data Breach Reveals Child Abuse Fantasies

Trigger Warning - the following section contains mention of child abuse.

“Get your AI girlfriend, boyfriend, therapist, possibilities are endless! Powered by NSFW AI Technologies,” says the website’s description on a Google search.

404 Media reported earlier this week that Muah AI, a popular NSFW AI companion tool, suffered a data breach, revealing some of the sick, twisted fantasies people have been engaging in.

A hacker who initially visited the site out of sexual interest soon found vulnerabilities in it and grew curious to dig deeper, 404 Media reported. After discovering what was in the website’s database, the hacker got in touch with 404 Media.

There were reportedly multiple instances of people trying to create chatbots to simulate child sexual abuse. According to the article, there were prompts seeking out sexual abuse fantasies with toddlers, and incest with young children. 🤮

While the leaked data does not indicate whether Muah AI complied with such requests, the fact that people are making such vile requests is in itself quite worrying.

404 Media also pointed out that the website’s weak security creates massive privacy concerns for users, who may not wish to reveal their sexual fantasies - the breached data could connect these prompts to real names and email addresses.

This is not the first time AI companions have been used to reveal our dark sides. A few years ago, there were reports of Replika AI - another AI girlfriend app - being used by men to abuse their virtual girlfriends.

There are entire forum discussions on the internet about how men are living out their abusive fantasies through AI - and bragging about it, too.

While these AI companion tools were created with the idea of beating loneliness, AI ethicists and women’s rights activists have other concerns: that the one-sided relationships these tools provide might encourage abusive and controlling behaviour, especially among men.

Could the Muah AI users trying to simulate child sexual abuse scenarios be encouraged to abuse children in real life? A Muah AI administrator named Harvard Han told 404 Media that they have a moderation team taking down requests linked to children. The hacked data, however, neither proves nor disproves this.

If you’re tempted to abuse your AI companion, remember Geoffrey Hinton’s words - it may one day show up at your doorstep, and it may not look like the cute young girl you had fantasised about.

X Responds To Deepfake Porn, Only For Copyright Violations: Study

A recent study revealed that X acted on AI-generated non-consensual intimate imagery (NCII), or sexualised deepfake nudes, only in cases of copyright violations, and took no action on reports filed under ‘non-consensual nudity’.

Researchers at the University of Michigan and Florida International University uploaded 50 AI-generated nudes on X, and reported half of them under the platform’s “copyright infringement” mechanism, and the other half under “non-consensual nudity”.

According to the study, X removed the images under the copyright condition within 25 hours, while taking no action for the non-consensual nudity reports for over three weeks.

Through the study, the researchers argue for “stronger and directed regulations and protocols to protect victim-survivors.”

Last year, Decode reported on how the AI boom led to an explosion of pornography on platforms like X, with Bollywood actors deepfaked into performing explicit sexual acts.

Report From Conflict Zones, From Your Couch

AI startup bloke Pieter Levels, who boasts of earning millions a year with no employees, using just his laptop, recently posted on X about merging his AI photo generator Photo AI with ElevenLabs’ text-to-speech model and a video lip-sync algorithm to generate videos of you speaking from wherever you want.

As a prototype, Levels posted a video of a woman purportedly reporting for CNN on Israel’s military operations from what looks like a desert.

BBC Verify’s Shayan Sardarizadeh shared Levels’ tweet and voiced his concerns about “realistic AI videos that make random individuals appear as mainstream journalists reporting from a real conflict zone”.

This will likely mark a further evolution of “fake news”, further polluting the already murky cyberspace.

While many users nodded along to Sardarizadeh’s concerns, some others trolled him, saying “content moderation” would fix the problem. LOL!

Latest in AI and Deepfakes

Have you been a victim of AI?

Have you been scammed by AI-generated videos or audio clips? Did you spot AI-generated nudes of yourself on the internet?

Decode is trying to document cases of abuse of AI, and would like to hear from you. If you are willing to share your experience, do reach out to us at [email protected]. Your privacy is important to us, and we shall preserve your anonymity.

About Decode and Deepfake Watch

Deepfake Watch is an initiative by Decode, dedicated to keeping you abreast of the latest developments in AI and its potential for misuse. Our goal is to foster an informed community capable of challenging digital deceptions and advocating for a transparent digital environment.

↗️ Was this forwarded to you?  Subscribe Now 

We invite you to join the conversation, share your experiences, and contribute to the collective effort to maintain the integrity of our digital landscape. Together, we can build a future where technology amplifies truth, not obscures it.

For inquiries, feedback, or contributions, reach out to us at [email protected]. 

🖤 Liked what you read? Give us a shoutout! 📢


↪️ Become A BOOM Member. Support Us!
↪️ Stop.Verify.Share - Use Our Tipline: 7700906588

↪️ Follow Our WhatsApp Channel
↪️ Join Our Community of TruthSeekers

Copyright (C) | Unsubscribe