Deepfake Watch 18

Pile Of Political Deepfake Trash | Deepfake Watch

Browser View | May 31, 2024 | Subscribe

We were all worried about highly advanced deepfakes deceiving the Indian electorate. With the polls nearly concluded, what we got was mostly AI-generated political shitposting and trashy face-swapping.

With a few exceptions, the most frequent use of deepfakes by political parties was to troll their opponents with non-realistic satirical posts. There was no dearth of cheap AI voice clones either.

Experts have warned, though, that as the tech gets better, it won’t be long before the silliness turns into something worse.

Opt-in to receive the newsletter every Friday.

Deepfake trolling and cheap voice clones

In April, Meta revamped its rules around labelling content generated using artificial intelligence, stating that a “Made with AI” label would be used where appropriate to alert users to the synthetic nature of what they are seeing.

Since then, as India witnessed six phases of polling, Meta’s Instagram has been littered with deepfake-led political shitposting by parties trolling their opponents, with barely any labelling.

Handles of political parties, along with their proxy pages, have been in a face-swapping frenzy to ridicule their opponents.

One deepfake tool used frequently was Viggle, which boasts of being able to 'mix a character image into a motion video' and of 'animating the character with a text motion prompt'.

Viggle was frequently used to mock political leaders by making them dance.

Beyond satire, AI was also used for deceptive purposes.

BOOM’s Deputy Editor Karen Rebelo reported for Decode on how cheap, low-quality AI voice clones were used to peddle disinformation right before Delhi went to polls on May 25.

BOOM debunked two videos made with fake graphics and voice clones of Hindi news anchors purporting to show AAP’s West Delhi candidate Mahabal Mishra ahead in opinion polls.

We also debunked a ‘leaked audio' claiming to be a phone call between Rajya Sabha MP Swati Maliwal and YouTuber Dhruv Rathee, and found it to be made with voice clones.

Karen’s report highlights how voice clones were the most preferred form of AI-led spoofing used to spread disinformation during the Lok Sabha elections. It’s cheap, easy to make, and hard to detect.

Professor Mayank Vatsa, whose department at the Indian Institute of Technology Jodhpur has built a deepfake detection tool called Itisaar, told Karen that such voice clones would only get more sophisticated.

"Most of the existing deepfake research and generation efforts, predominantly in the West, focus on the English language. For Hindi or other local languages, significant investment and dedicated teams are required to train the models, which not everyone can afford or accomplish. This likely impacted the overall generation of sophisticated deepfakes where synthetic or altered audio in local languages is seamlessly lip-synced, similar to what is achieved in English," Professor Vatsa explained.

Studies on perception of AI

Two reports studying people’s perceptions of AI were released this week.

A report by Adobe titled "Future of Trust Study for India" questioned Indians on their views on online content, and found that:

  • Around 86% think that misinformation and deepfakes will have an effect on upcoming elections.

  • Around 82% think it should be forbidden for political candidates to use generative AI in their campaigns.

  • Around 93% want to know if the content they consume is AI-generated.

  • Around 92% believe it's essential that they are provided with tools to verify such content.

Read the full report here.

Another report, published by the Reuters Institute, surveyed people in Argentina, Denmark, France, Japan, the UK, and the USA to understand “if and how people use generative artificial intelligence, and what they think about its application in journalism and other areas of work”.

In four of the six countries, most respondents believed that generative AI will make their lives better, while a significant minority believed it will make their lives worse. People were also more pessimistic about generative AI’s influence on society at large.

Read the full report here.

This week’s fact-checks on deepfakes

Have you been a victim of AI?

Have you been scammed by AI-generated videos or audio clips? Did you spot AI-generated nudes of yourself on the internet?

Decode is trying to document cases of abuse of AI, and would like to hear from you. If you are willing to share your experience, do reach out to us at [email protected]. Your privacy is important to us, and we will preserve your anonymity.

About Decode and Deepfake Watch

Deepfake Watch is an initiative by Decode, dedicated to keeping you abreast of the latest developments in AI and its potential for misuse. Our goal is to foster an informed community capable of challenging digital deceptions and advocating for a transparent digital environment.

↗️ Was this forwarded to you? Subscribe Now

We invite you to join the conversation, share your experiences, and contribute to the collective effort to maintain the integrity of our digital landscape. Together, we can build a future where technology amplifies truth, not obscures it.

For inquiries, feedback, or contributions, reach out to us at [email protected]. 

🖤 Liked what you read? Give us a shoutout! 📢

↪️ Become A BOOM Member. Support Us!

↪️ Stop.Verify.Share - Use Our Tipline: 7700906588

↪️ Follow Our WhatsApp Channel

↪️ Join Our Community of TruthSeekers

Unsubscribe