Deepfake Watch 1

Seeing Is Misleading: The era of cutting-edge deception

Browser View | February 2, 2023 | Subscribe

If 2023 was the year of artificial intelligence, 2024 is gearing up to be the year of digital deception!

A host of AI-powered tools has become readily available at affordable prices, and they are being misused in a largely unchecked environment. Highly realistic images and videos can now be fabricated with a few text prompts, and individuals can be made to appear to say whatever you want them to.

We’re in the age of ‘deepfakes’ - a portmanteau of ‘deep learning’ and ‘fake’ - and seeing is believing no more!

We at Decode decided to document the rapidly growing realm of AI and deepfakes, highlight the cases of abuse, and also provide AI-powered solutions to counter these issues, in the form of a concise weekly newsletter.

Opt-in to receive the newsletter every Friday.

The Taylor Swift Case: What Women Don’t Want

X, formerly known as Twitter, was recently flooded with AI-generated nudes of American pop star Taylor Swift. This is worrying for many women, who can become easy targets for faceless, nameless online trolls and bullies.

The case caught the attention of Swift’s dedicated fanbase, who fought back relentlessly and got the images removed, and it even drew a response from White House press secretary Karine Jean-Pierre. “We’re going to do what we can to deal with this issue,” she said at a press briefing.

But what can we do, really? For now, the options are limited.

Mia Shah-Dand, a New York-based entrepreneur and founder of Women in AI Ethics, told Decode that such deepfake videos have doubled since 2018, and that “governments are already late” in addressing the problem.

She stressed the “need to expand current laws for protection of women’s rights in the AI age and institute serious penalties for those building these harmful tools and for platforms distributing content that puts women at risk”.

A New Era Of Smear Campaigns

Politicians are another easy target of deepfakes, and the consequences can be devastating for democratic integrity.

Even US President Joe Biden was not spared. A fraudulent robocall mimicking Biden’s voice circulated in the US, urging voters not to vote in the presidential primaries.

This is a worrying start to the ‘biggest year in election history’ (more than 60 countries are headed to polls in 2024), and a highly prolific time for disinformation factories armed with snazzy new AI-powered tools.

India, which ranked first among countries at risk from disinformation in a recent WEF report, is expected to see some of the worst impacts of digital deception.

Bangladesh-based media researcher and fact-checker Sumon Rahman told us that “the nature of political participation, financial investment, communal tensions and caste politics etc. all together will make India a fertile ground for deepfake production”.

“There is a strong possibility that its production will be outsourced, alongside home-made ones. Our fact-checking usually takes a bit of time, but deepfakes might not give us that time before causing violence,” he added.

... And A Good Time For Scammers 🎭

In India, fraudulent videos of celebrities and industrialists promoting scams are rampant across social media and messenger apps.

Decode has extensively reported on deepfake videos of Shah Rukh Khan, Ratan Tata, Sadhguru and other prominent individuals promoting fraudulent funds and siphoning off people’s money.

The problem is now global. Take, for example, the case of Claudia Sheinbaum, former mayor of Mexico City, who appeared to promote a pyramid scheme in a video that recently went viral in Mexico.

A digitally created copycat of Sheinbaum introduces a “marvelous opportunity” to turn an initial investment of just 4,000 pesos (₹19,000) into 100,000 pesos (₹4.8 lakh) - every month!

The former mayor took matters into her own hands, calling out the fraudulent videos on social media and taking legal action against their creators in Mexico.

If folks like Swift, Biden and Sheinbaum can be targets, so can you - and that is why the dialogue around the ethical use of such technologies has never been more critical.

🔧 AI Detection Tools: Eleven Labs 

Eleven Labs is currently one of the most popular tools for artificially adding voices to photos and videos.

In the past, Decode has found this tool being abused to create scam videos in which prominent personalities appear to promote fraudulent investment opportunities.

Fortunately, Eleven Labs also provides an AI voice cloning detection tool called AI Speech Classifier, which helps detect whether an audio clip was created using their tool.
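For readers who want to automate such checks, a workflow along these lines is possible. This is a minimal sketch only: the endpoint URL, response schema, and the probability threshold below are assumptions for illustration, not Eleven Labs’ documented interface - consult their official API documentation before relying on it.

```python
import json
import urllib.request

# ASSUMPTION: endpoint path and response schema are hypothetical,
# shown only to illustrate the shape of such a check.
CLASSIFIER_URL = "https://api.elevenlabs.io/v1/ai-speech-classifier"

def interpret(probability: float, threshold: float = 0.5) -> str:
    """Map a 'made with Eleven Labs' probability to a readable verdict."""
    return "likely AI-generated" if probability >= threshold else "no match found"

def classify_clip(path: str, api_key: str) -> str:
    """Upload an audio clip to the (assumed) classifier endpoint
    and return a human-readable verdict. Performs a network call."""
    with open(path, "rb") as f:
        audio = f.read()
    req = urllib.request.Request(
        CLASSIFIER_URL,
        data=audio,
        headers={"xi-api-key": api_key, "Content-Type": "audio/mpeg"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    return interpret(result["probability"])
```

The point of the sketch is the shape of the workflow - upload a clip, receive a confidence score, translate it into a verdict a fact-checker can act on - rather than the exact API details.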

This is a classic example of how AI can also provide us the tools to curb its abuse.

About Decode and Deepfake Watch

Deepfake Watch is an initiative by Decode, dedicated to keeping you abreast of the latest developments in AI and its potential for misuse. Our goal is to foster an informed community capable of challenging digital deceptions and advocating for a transparent digital environment.

↗️ Was this forwarded to you?  Subscribe Now 

We invite you to join the conversation, share your experiences, and contribute to the collective effort to maintain the integrity of our digital landscape. Together, we can build a future where technology amplifies truth, not obscures it.

For inquiries, feedback, or contributions, reach out to us at [email protected]. 

🖤 Liked what you read? Give us a shoutout! 📢


↪️ Become A BOOM Member. Support Us!
↪️ Stop.Verify.Share - Use Our Tipline: 7700906588

↪️ Follow Our WhatsApp Channel
↪️ Join Our Community of TruthSeekers

Unsubscribe