A Million Dollar Music Heist | Deepfake Watch
September 06, 2024
Bot accounts, automated streaming and AI-generated music. That is all it took for Michael Smith to earn $10 million in seven years, until he got busted by the FBI.
Following the emergence of one of the biggest deepfake porn crises in South Korea, Telegram has apologised for its role in facilitating it. South Korean authorities are scrambling to tackle the growing problem, in which most of the perpetrators are minors.
Opt in to receive the newsletter every Friday.
The Stream Job
In 2017, Michael Smith, a music producer from Cornelius, North Carolina, started creating “bot accounts” on music streaming platforms Spotify, Apple Music, Amazon Music and YouTube Music to stream songs he owned.
Musicians, songwriters and other rights holders are entitled to small royalty payments every time their songs are streamed. Through thousands of bot accounts, Smith generated billions of illegitimate streams for his music.
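For a sense of scale, a rough calculation of ours rather than a figure from the indictment: streaming royalties typically run to a fraction of a cent per play, so accruing $10 million in payouts would take billions of streams.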
The music producer was well aware that if any of his songs were streamed too many times by bot accounts, the platforms would get wise. But if billions of streams were spread across tens of thousands of songs, the ploy would be much harder to detect. So how do you churn out an endless list of songs?
In 2018, he teamed up with the CEO of an AI music company and a music promoter on a mission to churn out hundreds of thousands of AI-generated songs. According to a press release by the US Justice Department, the CEO of the AI music company wrote to Smith in a 2019 email: “Keep in mind what we’re doing musically here... this is not ‘music,’ it’s ‘instant music’ ;).”
Armed with a steady flow of thousands of songs every week, Smith deployed his bots to accrue over $10 million in royalty payments.
He was finally busted by the FBI, and has been charged with wire fraud and money laundering. “It’s time for Smith to face the music,” said U.S. Attorney Damian Williams.
South Korea’s Deepfake Porn Crisis Continues
The advent of generative AI has now made the ubiquitous selfie a potential threat.
Last week, I mentioned how Telegram has been under the spotlight following the arrest of its cofounder and the emergence in South Korea of one of the worst digital sex crime epidemics to date.
Telegram’s promise of anonymity and discretion has lured South Korea’s growing community of digital misogynists into hundreds of channels, where the country’s female internet users are targeted with deepfake pornography at an unprecedented scale.
Amid nationwide outrage, South Korean authorities announced an investigation into Telegram for its role in the scandal, while seeking cooperation from French authorities to do so. Days later, the company apologised for how it had handled such content on its platform, and has complied with takedown requests from South Korea's Communications Standards Commission.
In an interview with the BBC, South Korean journalist Ko Narin - who broke the story on the deepfake porn scandal - explains how she stumbled upon an operation that was as organised as it was depraved.
She discovered rooms dedicated to specific high school or middle school students, and was shocked to find over 2,000 members in one of the groups dealing with underage sexual deepfakes. Most of the accused being investigated by the police are minors as well.
Currently, the creation of non-consensual sexually explicit deepfakes can draw a jail term of up to five years and a fine of up to 50 million won ($37,500). Following the scandal, the country’s ruling People Power Party has pledged to increase the maximum jail term to seven years.
The crisis has driven Korean women to remove their selfies from the internet, in a bid to keep their facial data from being exploited by digital misogynists armed with cheap deepfaking tools.
StopNCII
Microsoft recently announced its partnership with StopNCII - an initiative to combat non-consensual intimate imagery on the internet.
It provides a tool that allows victims of revenge porn and deepfakes to create digital fingerprints of their explicit images, which are then used by StopNCII’s partners to remove such content from their platforms.
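Here is a minimal sketch, in Python, of the fingerprint-and-match flow the tool relies on. It is an illustration, not StopNCII's actual pipeline: the real system is understood to use perceptual hashing, so resized or re-encoded copies of an image still match, whereas the plain cryptographic hash below only catches exact copies. All names and data are placeholders.

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    """Compute a one-way digital fingerprint of an image, locally.

    Only this short hash string ever leaves the victim's device;
    the image itself is never uploaded.
    """
    return hashlib.sha256(image_bytes).hexdigest()

# On the victim's device: hash the private image, submit only the hash.
private_image = b"<raw bytes of the victim's image>"  # placeholder
submitted_hash = fingerprint(private_image)

# On a partner platform: keep the shared hash list and screen uploads.
blocked_hashes = {submitted_hash}

def should_block(upload_bytes: bytes) -> bool:
    """True if an incoming upload matches a victim-submitted fingerprint."""
    return fingerprint(upload_bytes) in blocked_hashes

print(should_block(private_image))       # True: an exact copy is caught
print(should_block(b"unrelated image"))  # False: other content unaffected
```

The privacy property that matters here is that the fingerprint is one-way: partner platforms can match it against new uploads, but cannot reconstruct the original image from it.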
Other tech platforms that have partnered with StopNCII include Instagram, Facebook, Threads, TikTok, Pornhub, OnlyFans, Snapchat and Reddit.
Have you been a victim of AI?
Have you been scammed by AI-generated videos or audio clips? Did you spot AI-generated nudes of yourself on the internet?
Decode is trying to document cases of abuse of AI, and would like to hear from you. If you are willing to share your experience, do reach out to us at [email protected]. Your privacy is important to us, and we shall preserve your anonymity.
About Decode and Deepfake Watch
Deepfake Watch is an initiative by Decode, dedicated to keeping you abreast of the latest developments in AI and its potential for misuse. Our goal is to foster an informed community capable of challenging digital deceptions and advocating for a transparent digital environment.
↗️ Was this forwarded to you? Subscribe Now
We invite you to join the conversation, share your experiences, and contribute to the collective effort to maintain the integrity of our digital landscape. Together, we can build a future where technology amplifies truth instead of obscuring it.
For inquiries, feedback, or contributions, reach out to us at [email protected].
🖤 Liked what you read? Give us a shoutout! 📢
↪️ Become A BOOM Member. Support Us!↪️ Stop.Verify.Share - Use Our Tipline: 7700906588
↪️ Follow Our WhatsApp Channel↪️ Join Our Community of TruthSeekers