The Era of Synthetic Swindlers

Automating corporate heists. One prompt at a time.


In March 2025, a finance director at a Singapore MNC got an urgent WhatsApp from the UK-based CFO: move funds now for a confidential acquisition. A quick Zoom followed—familiar face, familiar voice, familiar urgency. The director approved US$499,000.

One problem: the CFO was a deepfake.

It mirrors the US$25 million Arup heist in Hong Kong, where AI-cloned “colleagues,” including a fake CFO, convinced a finance worker to authorise 15 transfers to external accounts.

These aren’t outliers but part of a globally rising trend. Cybersecurity firm Pindrop reports enterprise deepfake activity up 354% year-over-year, with low-cost, automated attacks hitting multiple targets at once.

With deepfakes flooding social media, certain Bollywood celebrities are taking early steps to prevent misuse of their likeness.

Actors Aishwarya Rai Bachchan, Abhishek Bachchan, and filmmaker Karan Johar have filed landmark lawsuits at Delhi High Court seeking protection of their "personality rights" against AI-generated content.

Rai Bachchan's petition, filed on September 9, specifically targets websites using "AI-generated pornographic and 'completely unreal intimate photographs'" of her, according to her lawyer Sandeep Sethi. 

Delhi High Court Judge Tejas Karia ruled that unauthorised use of a celebrity's identity, particularly through AI-manipulated videos and deepfakes, constitutes a violation of personality rights and a fundamental privacy intrusion. The court issued dynamic injunctions requiring Google to remove listed URLs within 72 hours.

Singapore's Corporate Deepfake Crisis

Southeast Asia has become ground zero for sophisticated deepfake-led corporate fraud.

The Monetary Authority of Singapore (MAS) issued an urgent advisory in March 2025 following multiple incidents where employees received WhatsApp messages from scammers impersonating company executives.

A recent MAS report detailed the disturbing pattern: "Digital manipulation had been used to alter the appearances of the scammers to impersonate these high-ranking executives. In some cases, the video calls would also involve scammers impersonating MAS officials and/or potential 'investors'."

According to the report, deepfakes now pose three critical risks to financial institutions: defeating biometric authentication, enabling sophisticated social engineering attacks, and facilitating misinformation campaigns.

The Pindrop Report: Reality Check

The scale of deepfake-assisted fraud becomes clear in Pindrop's 2025 Voice Intelligence and Security Report. Analyzing over 130 million calls, researchers found that machine-generated voices now account for 0.94% of all contact center calls: a 6.8x increase from early 2024.

The report reveals several alarming trends:

  • Synthetic voice attacks increased by 149% at banks and 475% at insurance companies during 2024

  • Deepfake activity peaked in Q4 2024, with deepfakes accounting for almost 2% of all fraud cases

  • Insurance and brokerage firms showed the highest incidences of deepfake fraud, at 2% and 1.3% respectively

YouTube's AI Violence Problem

According to a report by 404 Media, YouTube removed a channel called "Woman Shot A.I" that posted 27 videos of AI-generated content showing women being shot in the head.

The channel, which launched on June 20, 2025, had attracted over 1,000 subscribers and more than 175,000 views before its removal. The videos were created using Google's Veo AI video generator, identifiable by watermarks in the bottom-right corner.

YouTube acted only after 404 Media reached out for comment. A spokesperson confirmed the channel was terminated for violating Terms of Service, with the operator identified as a repeat violator circumventing a previous ban.

MESSAGE FROM OUR SPONSOR

Practical AI for Business Leaders

The AI Report is the #1 daily read for professionals who want to lead with AI, not get left behind.

You’ll get clear, jargon-free insights you can apply across your business—without needing to be technical.

400,000+ leaders are already subscribed.

👉 Join now and work smarter with AI.

📬 READER FEEDBACK

💬 What are your thoughts on using AI chatbots for therapy? If you have any such experience, we would love to hear from you.

Share your thoughts 👉 [email protected]


Have you been a victim of AI?

Have you been scammed by AI-generated videos or audio clips? Did you spot AI-generated nudes of yourself on the internet?

Decode is trying to document cases of abuse of AI, and would like to hear from you. If you are willing to share your experience, do reach out to us at [email protected]. Your privacy is important to us, and we shall preserve your anonymity.

About Decode and Deepfake Watch

Deepfake Watch is an initiative by Decode, dedicated to keeping you abreast of the latest developments in AI and its potential for misuse. Our goal is to foster an informed community capable of challenging digital deceptions and advocating for a transparent digital environment.

We invite you to join the conversation, share your experiences, and contribute to the collective effort to maintain the integrity of our digital landscape. Together, we can build a future where technology amplifies truth, not obscures it.

For inquiries, feedback, or contributions, reach out to us at [email protected].

🖤 Liked what you read? Give us a shoutout! 📢

↪️ Become A BOOM Member. Support Us!

↪️ Stop.Verify.Share - Use Our Tipline: 7700906588

↪️ Follow Our WhatsApp Channel