Deepfake Watch 12
It’s Not The Tech, It’s the Intent | Deepfake Watch
April 19, 2024
Indian politics is in the deepfake business, and business is a-boomin’. Deepfakes have been intensely discussed this week: BOOM has debunked quite a few, and at the same time we found real videos being dismissed as deepfakes.
For those of you who attended our event ‘Decoding Deepfakes and Elections’ in Delhi last Monday, I’ve taken the liberty of adding you to our mailing list. Please feel free to opt out at any time if you don’t wish to see this in your inbox.
Opt in to receive the newsletter every Friday.
“Deepfake tech isn’t bad. Intent is the question.”
Earlier this week in Delhi, Decode gathered fact-checkers, academics, and tech and policy experts for a discussion on ‘Deepfakes and Elections’.
Here’s the general consensus of the panel:
- Deepfakes are a worrying trend.
- People generating mis- and disinformation are limited only by their imagination, since it’s extremely easy to create fake stories.
- The technology used to create false information is better than the technology used to detect it.
- There is more investment in generating deepfakes than in detecting them.
Mayank Vatsa, professor of computer science at IIT Jodhpur, made a pertinent remark: fact-checkers tasked with telling deepfakes apart from real content are now dependent on technology.
On the other hand, real videos are being called deepfakes. BOOM’s deputy editor Karen Rebelo rightly called it “the liar’s dividend”: liars avoiding accountability by exploiting an information landscape saturated with falsities. Pointing to the much harder scenario of fact-checking real videos that are labelled as deepfakes, Karen asked, “How do we watermark reality?”
What a mess we’re in!
Here are some of the AI-related fact-checks we did this week:
Congress Functionaries Share AI Voice Clone Of Aamir Khan Targeting PM Modi
Video Of BJP's Nirahua Is Not A Deepfake; Amit Malviya Makes False Claim
Video Of Ranveer Singh Criticising PM Modi Is A Deepfake AI Voice Clone
Fact Check: AI Voice Clone Video Of Rahul Gandhi’s Resignation Viral Online
AI Generated Image Viral As Photograph of Total Solar Eclipse 2024
What else should you be reading
An investigation by Decode found at least eight chatbots on OpenAI’s GPT Store targeting the Indian elections. These chatbots appear to violate OpenAI's policies, which prohibit using their technology for political campaigning.
You might notice a new chatbot appearing on Instagram, Facebook and WhatsApp. Meta is pushing its AI assistant out everywhere, and the chatbot can now be accessed through the Meta AI website as well. The social media giant has simultaneously announced its latest AI model, Llama 3, which competes with OpenAI’s GPT-4.
Microsoft takes one step closer to “impersonating humans”
The company released a new AI research paper that gives a glimpse of VASA-1.
The paper’s TLDR describes it as “a single portrait photo + speech audio = hyper-realistic talking face video with precise lip-audio sync, lifelike facial behaviour, and naturalistic head movements, generated in real time.”
Those are all words we don’t like seeing in the same sentence, because together they are a recipe for misuse.
Okay, but what does that mean in English? It means you could soon take a headshot photo, animate it into a video, add audio to it, and oh, get perfect lip sync too. Yikes!
📑 Read: VASA-1 - Microsoft Research
Microsoft acknowledged that the technology has the potential to mislead or deceive, but said it has no plans to release an online demo, API, additional implementation details, or any related offerings until it is certain that the technology will be used responsibly and in accordance with proper regulations.
The UK is criminalising the creation of non-consensual deepfake porn, but...
Clare McGlynn, professor of Law at Durham University, welcomed the new regulation, but pointed out a major limitation.
While the law plans to criminalise the creation of non-consensual sexually explicit deepfakes, it requires evidence of malicious motives behind the creation.
Professor McGlynn writes in a LinkedIn post, “The malicious intent requirement gives dedicated deepfake porn websites and nudify apps a get out clause. They can continue to justify themselves on the basis they are 'fun' and humorous. If all creation was unlawful, there would be no justification for these websites and apps and comprehensive action could be taken against them.”
Have you been a victim of AI?
Have you been scammed by AI-generated videos or audio clips? Did you spot AI-generated nudes of yourself on the internet?
Decode is trying to document cases of abuse of AI, and would like to hear from you. If you are willing to share your experience, do reach out to us at [email protected]. Your privacy is important to us, and we shall preserve your anonymity.
About Decode and Deepfake Watch
Deepfake Watch is an initiative by Decode, dedicated to keeping you abreast of the latest developments in AI and its potential for misuse. Our goal is to foster an informed community capable of challenging digital deceptions and advocating for a transparent digital environment.
↗️ Was this forwarded to you? Subscribe Now
We invite you to join the conversation, share your experiences, and contribute to the collective effort to maintain the integrity of our digital landscape. Together, we can build a future where technology amplifies truth, not obscures it.
For inquiries, feedback, or contributions, reach out to us at [email protected].
🖤 Liked what you read? Give us a shoutout! 📢
↪️ Become A BOOM Member. Support Us!
↪️ Stop.Verify.Share - Use Our Tipline: 7700906588
↪️ Follow Our WhatsApp Channel
↪️ Join Our Community of TruthSeekers