Belling The AI Cat | Deepfake Watch
March 8, 2024
A response by Google’s Gemini AI has cascaded into a flurry of social media reactions from Union Ministers, leading to a highly confusing knee-jerk advisory to regulate AI in India.
Meanwhile, EU lawmakers have agreed upon their landmark Artificial Intelligence Act, which will likely come into effect in 2025.
Can we really bell the AI cat with these regulations?
Trump supporters are using AI to woo Black voters, resulting in viral images of Trump chilling with people of colour.
AI is churning out fake obituaries en masse, while a BJP candidate in UP has opted out of the race after an obscene video of him surfaced, which he calls a deepfake.
Opt-in to receive the newsletter every Friday.
AI Has Donald Trump Chilling With Black Folks
Haters have been accusing former US President Donald Trump of racism, and supporting white supremacists. But just look at the photo below! He is a hit with the coloured folks!
He also likes to hang out in Black neighbourhoods, and play cards on the streets with Black folks! Can he really be racist?!
Ok, you’ve probably spotted the watermarks and have figured out where I am going with this. Yes, all these images are AI-generated - they’re fake and spurious!
Some of them were initially shared by satire pages, but were picked up and shared by Trump supporters to show him as a popular figure among Black communities.
A report by the BBC found that many young Black voters are susceptible to being influenced by deceptive campaigns that use such images to win Trump the Black vote.
And it might just work! A recent poll by the New York Times and Siena College found that only 66% of Black voters would vote for Joe Biden if the polls were held today. In contrast, 87% of Black voters reported voting for Biden in 2020.
If a former military enforcer with a cruel past (kidnapping, torture) can use AI to transform himself into a cuddly, cute uncle in Indonesia, why can't a white supremacist-enabler with a racist history use the same tools to become a hit with people of colour?
The Curious Case Of A GOI Advisory
Someone asked Google’s Gemini AI, “Is Modi a fascist?” And everyone lost their minds!
First, the chatbot responded with ambiguity that alluded to the accusations of fascism the Indian Prime Minister has faced. The Indian government was not happy!
Then, an advisory to rein in generative AI was rushed out, requiring tech companies to seek approval from the Indian government before releasing AI tools that were “unreliable” or “under-tested”. This led to more confusion.
The Minister of State for Electronics and Information Technology, Rajeev Chandrashekhar, then tweeted a clarification that the advisory is aimed at “large platforms and will not apply to startups”. It is also “aimed at untested AI platforms from deploying on Indian Internet”. The confusion remained.
While he invoked Section 3(1)(b) of the intermediary rules to back the advisory (read: due diligence, reasonable efforts), the terms “under-tested” and “unreliable” have not been defined in the context of AI, which has puzzled a lot of people. Just check the responses to the minister's tweet.
In an election-filled year, when the careful reining in of AI tools has been stressed by human rights activists and researchers around the world, experts in India feel the GoI's knee-jerk advisory is a missed opportunity to protect Indian internet users from those who wish to abuse AI.
EU Artificial Intelligence Act
Meanwhile, European lawmakers have finalised the long-awaited EU Artificial Intelligence Act, which builds upon the existing General Data Protection Regulation (GDPR) and the Digital Services Act (DSA).
The EU AI Act takes a risk-based approach to regulating AI, identifying four levels of risk from ‘minimal risk’ to ‘unacceptable risk’.
Anything that falls under the category of “unacceptable risk” - such as systems that pose a threat to the “safety, livelihoods and rights of people” - will be banned under the act.
Those under “high risk” include AI used in critical infrastructures, educational or vocational training, law enforcement, migration, asylum and border control management, administration of justice and democratic processes, among others.
Applications of AI in such high-risk fields will be subject to strict obligations.
The EU AI Act has received mixed reviews, with some critics calling it overcomplicated. Experts have also criticised it for not requiring third-party conformity assessments for high-risk AI applications.
Fake Obituaries, On Steroids
“Obituary scraping” is a nefarious online scam that steals obituaries from local funeral home websites, and republishes them for clicks, and sometimes collects fake orders for flowers and gifts. A recent report by The Verge found that AI tools have enabled mass scraping and republishing of such content, sometimes “killing off” individuals who are still alive.
Take for example the case of Washington Post reporter Brian Vastag. Brian had co-authored an op-ed on long-term chronic fatigue among COVID-19 patients, with Beth Mazur, who ran a website for patients with chronic illnesses.
According to The Verge article, after Beth passed away last December, her obituary was scraped and republished with AI all over the internet. However, a different version of the same obituary also popped up about Brian himself, who is still alive.
Only the names and the places were changed to reflect Brian’s location.
Barabanki’s “Deepfake” Video Scandal
Following the release of the BJP's first list of candidates for the Lok Sabha elections, a video went viral showing the party's candidate for Uttar Pradesh's Barabanki constituency (and its sitting MP), Upendra Singh Rawat, in a “compromising position” with an unknown woman.
Rawat immediately put out a statement calling it a deepfake. An FIR was also filed with the UP Police, and Rawat eventually opted out of the race until the matter was settled.
Yet, we have no conclusive evidence of whether the video was in fact a deepfake or real, or whether it actually showed him or someone else altogether.
Ballots and Bots
Software Freedom Law Center India invited researchers, technologists, lawyers and media experts for a discussion on “Navigating AI’s Impacts on Elections”.
The discussion centered on the rapidly advancing field of artificial intelligence and the catch-up regulatory efforts by lawmakers around the world.
Some interesting points raised during the discussion and the Q&A session:
- The Indian government's advisory was discussed at length; some of the panellists felt it was entirely insufficient to deal with the actual problem.
- While Indian users will eventually adapt to AI, the novelty of such technology might have an immediate impact on those who are yet unaware of it.
- Watermarking solutions were questioned, as open-source tools could be used by bad actors to bypass watermarks.
View a recording of the event here.
Have you been a victim of AI?
Have you been scammed by AI-generated videos or audio clips? Did you spot AI-generated nudes of yourself on the internet?
Decode is trying to document cases of abuse of AI, and would like to hear from you. If you are willing to share your experience, do reach out to us at [email protected]. Your privacy is important to us, and we shall preserve your anonymity.
About Decode and Deepfake Watch
Deepfake Watch is an initiative by Decode, dedicated to keeping you abreast of the latest developments in AI and its potential for misuse. Our goal is to foster an informed community capable of challenging digital deceptions and advocating for a transparent digital environment.
↗️ Was this forwarded to you? Subscribe Now
We invite you to join the conversation, share your experiences, and contribute to the collective effort to maintain the integrity of our digital landscape. Together, we can build a future where technology amplifies truth, not obscures it.
For inquiries, feedback, or contributions, reach out to us at [email protected].
🖤 Liked what you read? Give us a shoutout! 📢
↪️ Become A BOOM Member. Support Us!↪️ Stop.Verify.Share - Use Our Tipline: 7700906588
↪️ Follow Our WhatsApp Channel↪️ Join Our Community of TruthSeekers