Love In The Time Of Deepfakes


AI tools may have many useful applications - but deepfaking is not one of them. Ok, you could argue that deepfakes are useful - but mostly to scammers and sexual deviants - and they have left many people in the lurch. Lately, we've been seeing a growing spate of romance scams built on deepfakes, and the results are as costly as they are heartbreaking.

The art of romance-scamming

French broadcaster TF1 aired an unusual story last Sunday. 

A French interior designer named Anne had been conned out of €830,000 by her "lover", who she thought was Brad Pitt. But it wasn't Brad Pitt. A scammer had used deepfakes of the Hollywood superstar to chat her up and make her fall in love.

Anne divorced her husband to be with 'deepfake Brad' and received a €775,000 divorce settlement - which went entirely to the scammer. She later got TF1 to withdraw the segment after she became the subject of widespread mockery for trusting the scammer enough to part with nearly a million euros (INR 7.39 crores).

I'm not here to question Anne's decisions - we don't really know what state of mind she was in, or how her decision-making came to be compromised to such an extent. What we do know is that deepfakes have made it rather easy to dupe people with highly sophisticated attacks.

While Anne's story is making headlines due to the large sum of money she lost, she is hardly the only one to fall for a deepfake romance scam.

Way back in 2018, before the AI boom officially kicked in, manga artist Chikae Ide was duped out of $523,200 by a scammer pretending to be Hollywood actor Mark Ruffalo.

Ide started speaking to 'deepfake Mark' after receiving a message on Facebook. She eventually did video calls with someone she truly believed to be Mark Ruffalo. She later found out that deepfake tech had been used to impersonate the actor.

In another case, a South Korean woman fell in love with a deepfake Elon Musk, and parted with US$ 50,000.

The modus operandi is quite similar across these cases. Most romance scams start off as conversations on social media platforms like Facebook. In the case of deepfake Brad and deepfake Mark, the scammers convinced their targets that their bank accounts were either frozen, or being monitored, due to divorce proceedings. They then claimed to need money for medical treatment, a flight ticket, or an investment.

And every time their target raised doubts, they would send more deepfake images or videos with words of love or sympathy.

Deepfake images of Brad Pitt sent by the scammer

Last year, Meta reportedly banned from its platforms over 60,000 accounts belonging to a Nigerian scam group called the "Yahoo Boys", who were allegedly experimenting with real-time deepfakes to conduct romance scams and digital sextortion.

More recently, Hong Kong police arrested 31 individuals suspected of running such romance scams. The HK Commercial Crime Bureau told the media last weekend that it had targeted "a local syndicate that was said to have recruited young people as scammers and produced deepfake profiles on online dating sites." The bureau added that around HK$ 34 million (US$ 4.3 million) had been lost to such scams in Taiwan, Malaysia and Singapore.

According to the US Federal Trade Commission, nearly US$ 1.3 billion was lost to such scams in 2022. The FTC report mentions that the two most common lies told by the scammers are:

  1. I, or someone I know, is sick or in jail

  2. I can teach you how to invest and make money

Remember the old adage your parents would tell you when you were young - be wary of strangers? It applies to everyone, including adults, when it comes to interacting with strangers on social media.

Chat with whoever you want, but as a rule of thumb, never send any money, sensitive data, or personal pictures to anyone you are not well-acquainted with in real life.

The Dirty Marketplace Of MrDeepFakes

This is that Black Mirror episode where depraved human impulses meet cutting-edge tech.

A recently-published study by researchers at Stanford University and UC San Diego dives into the nitty-gritty of the notorious MrDeepFakes, the internet’s largest marketplace for AI-generated deepfake pornography - with some disturbing findings. 

The researchers used automated tools to scrape metadata from over 43,000 deepfake videos on the platform, analysed 8,198 threads comprising 43,350 forum posts, examined 611,000 public profiles, studied 830 paid requests, and reviewed 1,129 posts from 25 deepfake creation guides.

At the time of data collection, 1,880 users had posted 42,986 sexual deepfake videos, with overall views exceeding 1.5 billion. The study also notes that an average custom-made deepfake sells for around US$ 87.50, fuelling an industry that thrives on non-consensual imagery and primarily targets female celebrities (88% of the videos examined featured actresses and female musicians).

The platform's self-regulatory claims were also found to be ineffective. Despite having rules against non-celebrity and abusive content, the study found rampant availability of explicit abuse scenes, requests targeting private individuals, and a disturbing case involving a request for child abuse material.

The study also revealed how easy it has become to create realistic deepfakes - with only 3,000-15,000 high-quality facial images required to train such deepfaking AI models. It adds that DeepFaceLab was a preferred tool for many deepfake creators on the platform due to its advanced features and lack of content restrictions. Furthermore, creators were found to get around the need for powerful graphics cards by using cloud GPU services like Google Colab and AWS, despite Google's ban on deepfake-related notebooks.

The study highlights the growing danger of deepfakes, and the urgent need for global regulations against platforms like MrDeepFakes.

Read the entire study by clicking here.

Message from our sponsor

Start learning AI in 2025

Everyone talks about AI, but no one has the time to learn it. So, we found the easiest way to learn AI in as little time as possible: The Rundown AI.

It's a free AI newsletter that keeps you up-to-date on the latest AI news, and teaches you how to apply it in just 5 minutes a day.

Plus, complete the quiz after signing up and they’ll recommend the best AI tools, guides, and courses – tailored to your needs.


Have you been a victim of AI?

Have you been scammed by AI-generated videos or audio clips? Did you spot AI-generated nudes of yourself on the internet?

Decode is trying to document cases of abuse of AI, and would like to hear from you. If you are willing to share your experience, do reach out to us at [email protected]. Your privacy is important to us, and we shall preserve your anonymity.

About Decode and Deepfake Watch

Deepfake Watch is an initiative by Decode, dedicated to keeping you abreast of the latest developments in AI and its potential for misuse. Our goal is to foster an informed community capable of challenging digital deceptions and advocating for a transparent digital environment.

We invite you to join the conversation, share your experiences, and contribute to the collective effort to maintain the integrity of our digital landscape. Together, we can build a future where technology amplifies truth, not obscures it.

For inquiries, feedback, or contributions, reach out to us at [email protected].

🖤 Liked what you read? Give us a shoutout! 📢

↪️ Become A BOOM Member. Support Us!

↪️ Stop.Verify.Share - Use Our Tipline: 7700906588

↪️ Follow Our WhatsApp Channel