Deepfake Watch 30

Grok It Like It’s Hot | Deepfake Watch

September 13, 2024

If you are a purveyor of disinformation and are tired of chatbots being muzzled by their makers, worry no more. xAI’s chatbot Grok will not shy away from helping you spread all sorts of lies. Trigger warning: I’ve included a bunch of images generated by Grok, and they might offend you!

After Taylor Swift was deepfaked into falsely supporting Donald Trump, the US pop icon publicly came out in support of Democratic Presidential candidate Kamala Harris.

Also, there is a new deepfake detection tool in town!

Opt in to receive the newsletter every Friday.

Musk’s chatbot spells trouble

Just ahead of the Indian elections, I met some folks at Microsoft AI and OpenAI who expressed apprehension about the potential for abuse of their AI tools to further boost India’s thriving disinformation landscape. There was a general consensus that guardrails would be imperative to curb any attempt to mislead voters; better safe than sorry.

But not Elon Musk, the maverick billionaire who turned Twitter from a fertile ground for troublemakers into a carnival of skullduggery. xAI’s chatbot Grok, introduced as a “sassy AI chatbot with a dash of rebellion” and presented by Musk as an “anti-woke” chatbot, is pushing the limits further.

Unlike its competitors, Grok has very few filters, and can be easily prompted to create NSFW content, along with highly outrageous, offensive and shocking images and videos.

Twitter has been filled with Grok-generated images of Mickey Mouse with a cigarette and a beer, Trump and Harris having ice cream at the beach like a couple, and former president Barack Obama about to snort a line of cocaine, among others.

Filmmakers The Dor Brothers used Grok to make a series of AI-generated videos of Pope Francis, Joe Biden, Vladimir Putin, Musk, Trump, Harris, Obama and Hillary Clinton robbing a grocery store at gunpoint, which went massively viral on social media.

While some of these videos and images still look unrealistic, they can easily be used to turbocharge electoral disinformation ahead of the US presidential election. Remember, even without deepfakes, electoral disinformation was able to push conspiracy theorists into attempting an insurrection in January 2021.

Shortly after Joe Biden announced his withdrawal from the race, X was flooded with posts claiming that it was too late for a new candidate to replace him. And the source of this false information was found to be Grok.

The National Association of Secretaries of State, which represents US secretaries of state, reached out to the company to flag this, but received an unsatisfactory response. Some of them took it public, writing an open letter to Elon Musk, which eventually pushed the company to direct people to the nonpartisan voter information website vote.org when users prompted Grok with questions around voting.

Musk also ran into trouble over Grok’s training data in Ireland last month, after a series of complaints were lodged with the Irish Data Protection Commission (DPC) over the company taking European X users’ data without consent to train Grok.

The court proceedings in Ireland were dropped after the company agreed not to use EU users’ tweets to train Grok. The DPC has now asked the European Data Protection Board, the EU body that coordinates data protection enforcement, to decide whether X breached EU data protection rules.

So it is very much possible to rein in powerful companies and individuals behaving as loose cannons, as long as institutions are willing to use the stick when required.

If you are a non-EU X user, and want to opt out of the company’s use of your tweets to train Grok, click here.

Taylor Swift - Frequent Victim Of Deepfakes - Backs Kamala Harris

Donald Trump had recently posted a series of deepfakes of Taylor Swift and her fans endorsing him for president.

To counter this, the pop icon, who commands a massive following, finally made her support for Trump’s opponent Kamala Harris public in an Instagram post, while touching upon the dangers of AI.

“Recently I was made aware that AI of ‘me’ falsely endorsing Donald Trump’s presidential run was posted to his site. It really conjured up my fears around AI, and the dangers of spreading misinformation. It brought me to the conclusion that I need to be very transparent about my actual plans for this election as a voter. The simplest way to combat misinformation is with the truth,” she wrote in her post.

Introducing: DeepFake-o-meter

A team of researchers at the University at Buffalo’s Media Forensics Lab, led by Professor Siwei Lyu, recently launched their deepfake detection tool DeepFake-o-meter, which ‘integrates 18 cutting-edge detection algorithms for AI-generated audio, images, and videos’.

This is a step towards providing regular internet users with access to tools which could help them distinguish between real and synthetic media.

You can sign up for the tool by clicking here.

Latest In AI And Deepfakes

Have you been a victim of AI?

Have you been scammed by AI-generated videos or audio clips? Did you spot AI-generated nudes of yourself on the internet?

Decode is trying to document cases of abuse of AI, and would like to hear from you. If you are willing to share your experience, do reach out to us at [email protected]. Your privacy is important to us, and we shall preserve your anonymity.

About Decode and Deepfake Watch

Deepfake Watch is an initiative by Decode, dedicated to keeping you abreast of the latest developments in AI and its potential for misuse. Our goal is to foster an informed community capable of challenging digital deceptions and advocating for a transparent digital environment.

↗️ Was this forwarded to you?  Subscribe Now 

We invite you to join the conversation, share your experiences, and contribute to the collective effort to maintain the integrity of our digital landscape. Together, we can build a future where technology amplifies truth, not obscures it.

For inquiries, feedback, or contributions, reach out to us at [email protected]. 

🖤 Liked what you read? Give us a shoutout! 📢

↪️ Become A BOOM Member. Support Us!
↪️ Stop.Verify.Share - Use Our Tipline: 7700906588

↪️ Follow Our WhatsApp Channel
↪️ Join Our Community of TruthSeekers

Unsubscribe