Deepfake Watch 2

AI Images Of Palestinian Suffering | Deepfake Watch

Browser View | February 9, 2024 | Subscribe

AI-generated images and videos are seemingly everywhere now!

Their near-omnipresence in our social feeds has been concerning enough for big-tech player Meta to come out with an AI-labeling policy.

Meanwhile, India's Minister of State for Electronics and Information Technology, Rajeev Chandrasekhar, recently said we should not fear this technology because it is transformative. Sure, there are some great benefits, but some recent cases of its abuse spell a downright Huxleyan dystopia, with a dash of Orwell on top.

A swarm of deepfakes of Pakistani politicians was unleashed on social media, ahead of the general elections on Thursday.

Residents of Dubai suddenly had their television streams interrupted last Sunday by an AI-generated anchor showing Israeli atrocities in Gaza.

Also, we have more Taylor Swift deepfakes, Palestine-related synthetic media, and a new detection tool.

Opt in to receive the newsletter every Friday.

“Imagined With AI”

Meta was recently pulled up by its Oversight Board for gaps in its manipulated media policy, after deepfake videos and audio clips of US President Joe Biden went viral.

Meta responded, saying it will mark all content made with the company's own AI image generator as "Imagined With AI", and that it is building tools to identify invisible markers in images made not just with its own tools, but also with those from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock.

In its press release, Meta calls AI 'a sword and a shield', citing how AI systems already help it detect harmful content.

But it also admits the problem: invisible markers can be removed, and detection tools are still playing catch-up. Can they catch up before the many elections this year? That is what worries us.
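The invisible markers Meta describes build on industry provenance standards such as C2PA (Content Credentials) and the IPTC "trainedAlgorithmicMedia" digital source type, both of which leave recognisable strings in an image file's metadata. As a rough illustration of why these markers are easy to strip, here is a minimal heuristic sketch (not Meta's actual detector) that simply scans a file's raw bytes for those marker strings — anything that survives a metadata wipe or a screenshot, this will miss:

```python
# Minimal heuristic sketch: scan an image file's raw bytes for provenance
# markers that industry standards embed in AI-generated media. This is NOT
# Meta's detector -- it only finds metadata tags, which (as the article
# notes) can be removed entirely, e.g. by re-saving or screenshotting.

# Marker strings used by real provenance standards:
#  - IPTC DigitalSourceType value for generative-AI media
#  - C2PA (Content Credentials) manifest label
AI_PROVENANCE_MARKERS = [
    b"trainedAlgorithmicMedia",  # IPTC digital source type for generative AI
    b"c2pa",                     # C2PA / Content Credentials manifest label
]

def looks_ai_tagged(data: bytes) -> bool:
    """Return True if any known AI-provenance marker appears in the bytes."""
    return any(marker in data for marker in AI_PROVENANCE_MARKERS)

def check_file(path: str) -> bool:
    """Convenience wrapper: scan a file on disk."""
    with open(path, "rb") as f:
        return looks_ai_tagged(f.read())
```

A negative result here means nothing — the markers are optional and trivially removable, which is exactly the catch-up problem Meta concedes.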

Because It’s Already Happened - In Pakistan

Pakistan went to the polls amid massive concerns of election rigging and poll violence. In the middle of all this, AI-generated disinformation hit social media.

Ahead of the elections, a deepfake audio clip emerged of former Pakistan Prime Minister Imran Khan - who has been languishing in jail since last year - in which he purportedly asks supporters of his party, Pakistan Tehreek-e-Insaf (PTI), to boycott the vote.

A video of Khan's legal advisor Nadeem Haidar Panjutha also appeared online, urging people to show their support for PTI by refraining from voting. It turned out to be a deepfake, posted by an impostor account on X.

Another deepfake video popped up, showing PTI leader Muhammad Basharat Raja purportedly withdrawing his candidature from the NA-55 constituency in Rawalpindi.

This appears to be a coordinated effort to use highly realistic deepfakes to dissuade PTI supporters from voting.

PTI took to X to highlight the deepfake campaign. How much damage these deepfakes did is uncertain, as we are yet to know the results of the polls.

[Video: Artificial intelligence and deepfakes take over Pakistan elections • FRANCE 24 English]

AI Anchor Shows Up On UAE TV Stream Unannounced

Last Sunday, some residents of Dubai had their TV streams interrupted by an AI-generated anchor bringing them the horrors of Israel's onslaught on Gaza.

Analysts at Microsoft say this was a hacking operation by a group known as Cotton Sandstorm, run by the Islamic Revolutionary Guard Corps, a branch of Iran's armed forces.

The non-human anchor showed unverified, graphic images that claimed to show Palestinians injured and killed in Israeli airstrikes.

Not Another Taylor Swift Deepfake

After her AI-generated nudes broke the internet, another Taylor Swift deepfake appeared, falsely showing her promoting Donald Trump’s ‘stolen election’ conspiracy theory at the Grammys.

Meanwhile, disinformation research firm Graphika looked into the origins of Swift's now-deleted AI nudes, and found that they came from the internet's very own bowels - 4chan. UGH!

It found a thread encouraging people to evade the filters and safeguards built into AI image generators. Clearly, there is cause for concern, and fear (contrary to what our IT Minister says), as bad actors are working hard to find ways to abuse AI.

Do We Need AI Images Of Palestinian Suffering?

Over 11,000 children have been killed in Israeli airstrikes on Gaza, making it one of the worst human catastrophes for children in modern times.

This war has produced a stream of stomach-churning footage and images of real suffering and devastation.

So we were surprised to find people resorting to AI-generated images of children sleeping in muck to seek sympathy for the plight of the Palestinians.

🔍 AI-Spotting 

To find out whether those images of Palestinian children sleeping in inhumane conditions were AI-generated, we used a detection tool provided by Hive.

The tool comes in the form of a Chrome Extension that can be enabled with a few clicks.

You can then go to the image you want to check, right-click, and select the "Hive AI Detector" option. Works like a charm!
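If you need to check images in bulk rather than one right-click at a time, detection services like Hive's can typically also be scripted. The sketch below is purely illustrative: the endpoint URL, auth scheme, and response field (`ai_generated`) are placeholders we made up for the example, not Hive's actual API contract — consult the provider's own API documentation before using anything like this:

```python
import json
import urllib.request

# Hypothetical sketch of scripting an AI-image-detection service instead of
# using a point-and-click browser extension. The endpoint, auth header, and
# JSON response shape below are ASSUMPTIONS for illustration only.

DETECT_URL = "https://api.example-detector.com/v1/detect"  # placeholder endpoint

def build_request(api_key: str, image_url: str) -> urllib.request.Request:
    """Build a POST request submitting an image URL for analysis."""
    payload = json.dumps({"url": image_url}).encode("utf-8")
    return urllib.request.Request(
        DETECT_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",  # assumed auth scheme
            "Content-Type": "application/json",
        },
        method="POST",
    )

def parse_verdict(response_body: str, threshold: float = 0.5) -> str:
    """Map an assumed {"ai_generated": <score 0..1>} response to a label."""
    score = json.loads(response_body)["ai_generated"]
    return "likely AI-generated" if score >= threshold else "likely authentic"
```

Whatever the real response looks like, treat the score as one signal, not a verdict — detectors of this kind are known to produce both false positives and false negatives.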

About Decode and Deepfake Watch

Deepfake Watch is an initiative by Decode, dedicated to keeping you abreast of the latest developments in AI and its potential for misuse. Our goal is to foster an informed community capable of challenging digital deceptions and advocating for a transparent digital environment.

↗️ Was this forwarded to you?  Subscribe Now 

We invite you to join the conversation, share your experiences, and contribute to the collective effort to maintain the integrity of our digital landscape. Together, we can build a future where technology amplifies truth, not obscures it.

For inquiries, feedback, or contributions, reach out to us at [email protected]. 

🖤 Liked what you read? Give us a shoutout! 📢


↪️ Become A BOOM Member. Support Us!
↪️ Stop.Verify.Share - Use Our Tipline: 7700906588
↪️ Follow Our WhatsApp Channel
↪️ Join Our Community of TruthSeekers

Unsubscribe