Deepfake Watch 10

Lavender - The Killer AI | Deepfake Watch

April 05, 2024

Long ago, when we imagined AI coming for our lives, we dreamed up scary, sentient, robot-like entities that were cold-blooded killers.

Think of the T-800 (from Terminator), HAL 9000 (from 2001: A Space Odyssey), or the Sentinels (from The Matrix). But in reality (so far), the killer AI has turned out to be a machine that simply prints names. Humans with deadly weapons did the rest.

Opt in to receive the newsletter every Friday.

Automating The Kill List

Over 33,000 Palestinians have been killed in Israel’s brutal military campaign in Gaza, following the October 7 attack by Hamas.

A recent investigation by Israel-based outlets +972 Magazine and Local Call has now revealed that the Israeli army has been using an AI-based programme called Lavender to generate its list of targets.

Sources in the army, who reportedly served in the ongoing conflict, told the outlets that during the first few weeks of the war, as many as 37,000 Palestinians were marked as suspects based entirely on Lavender’s recommendations. Their homes then became targets.

The report also revealed that no further verification followed Lavender’s kill recommendations, with no attempt to check why particular individuals were chosen. This was despite the knowledge that the system made errors in roughly 10% of cases - an error rate that, across 37,000 names, would mean thousands of people wrongly marked.

The report added that other automated systems - including one called “Where’s Daddy?” - were used to track targeted individuals and carry out airstrikes while they were at home, in the presence of their entire families.

The devastation in Gaza has primarily affected civilians, with children making up a large share of those killed.

Deepfaked by best friend

In yet another heartbreaking story, a woman in England - who had been targeted with deepfake pornography and vicious harassment - found out that her best friend was behind it.

During the COVID-19 lockdown, Jodie (name changed for anonymity) found private images of herself being shared on Twitter, with captions suggesting she was a sex worker.

She was not the only one - she also found images of girls she knew from her university and from her hometown in Cambridge. Eventually, she teamed up with another victim and close friend, Daisy (name changed for anonymity).

Jodie and Daisy figured out that the person sharing these images must be someone they both knew, and started listing out all the mutual contacts on their private social media accounts.

After a few false leads, Jodie came across deepfake porn images that provided a clue - they were based on a picture she had shared privately with her close friend Alex Woolf.

Friends with Jodie since their teenage years, Alex had comforted her through the online harassment - which had led Jodie to dismiss him as a suspect at first. But the deepfake nude made it certain.

Speaking to BBC’s File On 4, Jodie said, “I re-lived every conversation that we had, where he had comforted me and supported me and been kind to me. It was all a lie.”

Woolf, then 26, was convicted in August 2021 of taking images of 15 women, including Jodie, without their consent and distributing them on pornographic websites.

The penalty was a 20-week prison sentence, suspended for two years, and an order to pay £100 in compensation to each of his victims.

While many countries are drawing up fresh regulations on the creation of non-consensual deepfake pornography, the anonymity the internet affords to users of such tools still allows most culprits to escape any consequences.

Kejriwal singing songs in jail?

Shortly after Delhi Chief Minister Arvind Kejriwal was remanded to judicial custody on April 1, BJP functionaries on social media started posting a video that purportedly showed Kejriwal singing songs while dressed in prison attire.

The singing voice closely resembled Kejriwal’s, leading us to suspect that the video was a deepfake.

Nivedita Niranjankumar, news editor at BOOM, analysed the images used in the video and found several anomalies. In one photo, Kejriwal has more than five fingers on each hand; in another, his hand looks meshed together, with no clearly defined fingers visible at all.

She found similar discrepancies in the other images included in the reel: distorted toes in photos of Sunita Kejriwal and of a seated Kejriwal, and uniformed officers whose molten faces appear to blend into the bars of the jail cell. Sunita Kejriwal’s face also looks disproportionate to her body, indicating that it was digitally inserted into the image.

Furthermore, BOOM’s fact-checkers ran the audio clip of Kejriwal singing through Itisaar.ai, a tool developed by the Image Analysis and Biometric Lab (IAB) at the Indian Institute of Technology (IIT) Jodhpur, which determined that the voice is an AI-generated deepfake.

This is yet another example of political parties using satire as cover to attack rivals on social media with AI-generated content, while evading penalties from social media companies.

Namo In Tamil

Speaking of cloned voices, have you come across the Twitter account ‘Narendra Modi Tamil’?

The account, with over 30,000 followers, posts videos of speeches by India’s Prime Minister, dubbed in Tamil. However, the dubbed voice sounds strikingly similar to Modi’s own.

That’s because voice-cloning software was used to create the dubbed speeches. BOOM ran the audio from these videos through two different detection tools, both of which confirmed that AI was used to create the speeches.

How To Spot A Deepfake?

In a recent article, the Associated Press offered some simple tips on how to tell deepfakes from the real thing.

It covered a few common guidelines that most of us fact-checkers use instinctively to spot a deepfake.

The most common tell in AI-generated images is unnatural smoothness, especially in a person’s skin. Deepfake expert Henry Ajder, quoted in the article, does warn that such inconsistencies can be removed through creative prompting.

Other tips involve examining the image or video in detail for small inconsistencies - blurry teeth, badly synchronized lip movements, or a facial skin tone that doesn’t match the rest of the body.

But the most important tip in the article concerns context. Drawing on Poynter’s highly helpful advice on spotting deepfakes, it notes that if you see a public figure doing something that appears “exaggerated, unrealistic or not in character”, chances are it’s a deepfake.

Remember, in the era of deepfakes, seeing is misleading. Especially if you’re seeing through a screen.

Have you been a victim of AI?

Have you been scammed by AI-generated videos or audio clips? Did you spot AI-generated nudes of yourself on the internet?

Decode is documenting cases of AI abuse and would like to hear from you. If you are willing to share your experience, reach out to us at [email protected]. Your privacy is important to us, and we will preserve your anonymity.

About Decode and Deepfake Watch

Deepfake Watch is an initiative by Decode, dedicated to keeping you abreast of the latest developments in AI and its potential for misuse. Our goal is to foster an informed community capable of challenging digital deceptions and advocating for a transparent digital environment.

↗️ Was this forwarded to you?  Subscribe Now 

We invite you to join the conversation, share your experiences, and contribute to the collective effort to maintain the integrity of our digital landscape. Together, we can build a future where technology amplifies truth, not obscures it.

For inquiries, feedback, or contributions, reach out to us at [email protected]. 

🖤 Liked what you read? Give us a shoutout! 📢


↪️ Become A BOOM Member. Support Us!
↪️ Stop.Verify.Share - Use Our Tipline: 7700906588

↪️ Follow Our WhatsApp Channel
↪️ Join Our Community of TruthSeekers
