South Korea’s Deepfake Porn Crisis | Deepfake Watch
August 30, 2024
Growing misogyny, coupled with easily accessible deepfake tools, has polluted South Korea’s cyberspace with highly disturbing non-consensual deepfake pornography. And Telegram is playing a major role.
The arrest of Telegram co-founder Pavel Durov, and the revelation of some extremely disturbing Telegram groups, have reignited the conversation around the risks posed by the app, which has become a haven for digital criminals.
As governments around the world, including that of South Korea, try to clamp down on deepfake pornography, Telegram poses a massive challenge in terms of regulating digital criminals while upholding privacy.
Opt in to receive the newsletter every Friday.
The Curious Case Of Digital Sex Criminality
Earlier this year, a report on Decode highlighted how features like end-to-end encryption and self-destructing messages had made Telegram a fertile ground for cybercriminals to thrive. My colleague Adrija Bose dug a bit deeper into the muck, and found a community of predators selling child sex abuse videos for Rs. 40 - Rs. 5000.
It’s not surprising that the same platform is now being frequented by AI sex criminals, and South Korea is seeing the worst of it.
Last week, popular Korean feminist X user Queen Archive, with over 120,000 followers, received a tip about a pervasive community on Telegram distributing deepfake pornography. She posted all this information on X, highlighting a disturbing phenomenon that has exclusively targeted women and girls, including minors.
The country has had its share of troubles with digital sex criminals before. Back in 2018-2019, a group of men gathered on Telegram and carried out coordinated blackmailing of young women, to coerce them into performing sexual acts. This was called the ‘nth-room’ incident, and the group’s leader, Cho Ju-bin, got a 42-year prison sentence.
The same platform has now facilitated the congregation of purveyors of deepfake sex abuse material. Queen Archive told Korean newspaper Chosun Daily that she kept getting notifications all night after going public with the information on the Telegram channel - with victims reaching out with stories she found to be “truly devastating”. One of the channels had over 220,000 active members.
The content she found was stomach-churning. “In one ‘humiliation room’ with 1,932 members, there were sub-rooms targeting their cousins, moms, acquaintances, older sisters, and younger sisters. That was also when I learned about a group targeting female soldiers,” she told the newspaper.
These groups also shared personally identifiable information, including home addresses and phone numbers, making the victims extremely vulnerable to blackmail.
“The most distressing incident was when a separate room was set up to humiliate a specific victim, sharing their personal details and deepfake videos. Over 1,000 participants contacted the victim, further tormenting them by sharing their reactions,” Queen Archive added. Her X handle @QueenArchive1 has now been suspended.
The country’s media regulator is seeking cooperation from social media platforms to track, delete and block such content, and has also requested French authorities to facilitate communication with Telegram. The country is also mulling over criminalising the purchasing and viewing of non-consensual sexual deepfakes, the government announced earlier today.
Governments around the world have been struggling to keep up with the growing menace of deepfake porn. Many of the perpetrators - both creators and distributors - were found to be schoolboys targeting their female peers, presenting regulators with the tricky problem of juvenile delinquency.
Many countries have now started passing new regulations that criminalise the distribution and possession of non-consensual deepfake porn, while the UK has gone a step further by criminalising the creation of such content as well.
Have you been a victim of AI?
Have you been scammed by AI-generated videos or audio clips? Did you spot AI-generated nudes of yourself on the internet?
Decode is trying to document cases of abuse of AI, and would like to hear from you. If you are willing to share your experience, do reach out to us at [email protected]. Your privacy is important to us, and we will preserve your anonymity.
About Decode and Deepfake Watch
Deepfake Watch is an initiative by Decode, dedicated to keeping you abreast of the latest developments in AI and its potential for misuse. Our goal is to foster an informed community capable of challenging digital deceptions and advocating for a transparent digital environment.
↗️ Was this forwarded to you? Subscribe Now
We invite you to join the conversation, share your experiences, and contribute to the collective effort to maintain the integrity of our digital landscape. Together, we can build a future where technology amplifies truth, not obscures it.
For inquiries, feedback, or contributions, reach out to us at [email protected].
🖤 Liked what you read? Give us a shoutout! 📢
↪️ Become A BOOM Member. Support Us!
↪️ Stop.Verify.Share - Use Our Tipline: 7700906588
↪️ Follow Our WhatsApp Channel
↪️ Join Our Community of TruthSeekers