The Real Deepfake Election Is Nigh | Deepfake Watch
August 23, 2024
With the presidential election only months away, the United States is ripe for a political deepfake slugfest. Deepfakes have been embraced by some of the most influential people in the country - including one of the presidential candidates.
Although more accessible than ever, deepfaking tools are still in their infancy - especially when it comes to handling voices and accents that are not in English. While the much-dreaded deepfake doomsday has yet to show its true face in the ‘year of elections’, it is not entirely behind us.
And the US - still reeling from the consequences of the 2020 stolen election conspiracy theory that led to the January 6 riots - had better be ready for the worst.
Opt in to receive the newsletter every Friday.
How I Learned To Stop Worrying And Love The Deepfake
I’m borrowing this subheading from Stanley Kubrick’s Cold War black comedy masterpiece Dr. Strangelove, which satirises the march towards nuclear holocaust. But blowing things up is so outmoded, so passé - it’s much easier to weaponise social media and dismantle public discourse.
Shortly after Joe Biden withdrew from the race, paving the way for Kamala Harris, Elon Musk shared a convincing voice clone of Harris delivering a self-deprecating speech. “This is amazing 😂,” said the billionaire. (No disclaimer, of course.)
Around 10 days ago, Donald Trump started tweeting again after a forced hiatus from the platform of over three and a half years, and he too is embracing AI. He posted an AI-generated image of Harris addressing a large audience in a Soviet-style setting - hammer, sickle and all - furthering the narrative that she is a communist. He also posted a bizarre video of what appears to be him and his new best friend Musk dancing to ‘Stayin’ Alive’.
Testing the boundaries, he posted on his platform Truth Social a collage of deepfakes showing Taylor Swift and her Swifties coming out in support of him. “Taylor wants you to vote for Donald Trump,” read one of the images, which showed a Swift deepfake clad in an Uncle Sam t-shirt.
The Trump campaign sure does love AI - how far it is willing (and allowed) to push it remains to be seen as we get closer to polling day. And Musk’s AI chatbot Grok is more than happy to help.
The Grok image generator, which debuted last week, has surged in popularity for its higher tolerance of prompt-engineering mischief compared to its counterparts. As a result, the platform is being flooded with offensive, violent and misleading AI content.
Just how dangerous are deepfakes and AI to democracy?
We have spent a lot of time obsessing over a barrage of sophisticated, highly realistic deepfakes - but should we be dismissing these ‘silly’ ones that are ‘obviously not real’?
Information literacy expert Mike Caulfield wrote in his blog that “the vast majority of misinformation is offered as a service for people to maintain their beliefs in face of overwhelming evidence to the contrary.”
Researchers are still struggling to quantify the influence of such content on people’s voting behaviour, and it may take a few more elections - and a lot of chaos - before we finally figure it out (hopefully).
As for the upcoming US elections, we are sure to bear witness to plenty more bizarre visuals as the campaigns go into overdrive.
Her, Or Not
Remember when OpenAI tried to launch an AI voice assistant that sounded like Scarlett Johansson, and eventually dropped it after the actor threatened legal action? The company appeared to be channelling the Spike Jonze film ‘Her’, in which the protagonist falls in love with a highly advanced voice assistant, voiced by Johansson herself.
Turns out, people are actually getting cozy with - and hooked on - these voice assistants.
Such ‘emotional reliance’ on AI chatbots could have serious consequences, according to a recent article by Vox. It cites an MIT Media Lab study highlighting how users seeking a caregiving role from such chatbots could use prompts that “elicit precisely this behaviour”.
If such chatbots can provide an emotional relationship, could it push people away from building connections with real humans? OpenAI thinks this is a potential risk.
“Extended interaction with the model might influence social norms. For example, our models are deferential, allowing users to interrupt and ‘take the mic’ at any time, which, while expected for an AI, would be anti-normative in human interactions,” the company states in its report. In other words, if you get too used to being deferred to by AI companions, you might bring those narcissistic expectations into your interactions with real humans.
The Vox article raises a few pertinent questions about human relationships, and how much they stand to be influenced or distorted by the arrival of synthetic companions. Certainly a great topic for behavioural scientists to study.
Have you been a victim of AI?
Have you been scammed by AI-generated videos or audio clips? Did you spot AI-generated nudes of yourself on the internet?
Decode is trying to document cases of abuse of AI, and would like to hear from you. If you are willing to share your experience, do reach out to us at [email protected]. Your privacy is important to us, and we shall preserve your anonymity.
About Decode and Deepfake Watch
Deepfake Watch is an initiative by Decode, dedicated to keeping you abreast of the latest developments in AI and its potential for misuse. Our goal is to foster an informed community capable of challenging digital deceptions and advocating for a transparent digital environment.
↗️ Was this forwarded to you? Subscribe Now
We invite you to join the conversation, share your experiences, and contribute to the collective effort to maintain the integrity of our digital landscape. Together, we can build a future where technology amplifies truth rather than obscuring it.
For inquiries, feedback, or contributions, reach out to us at [email protected].
🖤 Liked what you read? Give us a shoutout! 📢
↪️ Become A BOOM Member. Support Us! ↪️ Stop.Verify.Share - Use Our Tipline: 7700906588
↪️ Follow Our WhatsApp Channel ↪️ Join Our Community of TruthSeekers