Deepfake Watch 33
Beyond Deepfakes: India’s AI Election | Deepfake Watch
Browser View | October 04, 2024 | Subscribe
With the AI hype came hyped-up expectations of a deepfake doomsday during India’s general elections earlier this year. While the reality did not match that hype, AI was indeed deployed in various ways by political parties across the spectrum.
AI image generators, fueled by billions of dollars in funding, have led to an explosion of child sexual abuse material (CSAM) on the darknet, and many of them were trained on real CSAM. Researchers are fighting to take them down.
In the past few weeks I have mentioned the risks of facial recognition technology, along with the rise of AR glasses. A recent project by a pair of Harvard students combined the two technologies, with a disturbing outcome.
Opt-in to receive the newsletter every Friday.
A quick summary of India’s first AI Election
BOOM’s Deputy Editor Karen Rebelo authored a report for UT Austin Center for Media Engagement highlighting the sudden explosion of AI in electoral campaigns during the Indian general elections last summer.
While sophisticated deepfakes were scarce, political parties deployed AI in various other ways, without any oversight whatsoever. Here are some of the highlights from Karen’s report:
Multilingual voter outreach: An AI translation tool called Bhashini, built by the Ministry of Electronics and Information Technology, was used to translate Narendra Modi’s speeches from Hindi into multiple regional languages. Modi critic and YouTuber Dhruv Rathee used a similar tool to translate his videos from Hindi to Tamil, Telugu, Bengali, Marathi and Kannada.
Internal Deepfakes: Political parties used deepfake algorithms to generate likenesses of popular deceased politicians from their own ranks to garner support (for example, late Tamil Nadu Chief Minister M. Karunanidhi and late CPI(M) leader Buddhadeb Bhattacharjee).
Deceptive Deepfakes: We did see a number of deepfakes created with the intent of spreading disinformation. While the volume of such content did not meet expectations, it targeted high-profile candidates, celebrities and public figures.
Satire and Memes: Congress and the BJP competed with each other throughout the election season in using deepfakes to satirise members of the other party and its allies. Modi also personally endorsed a satirical deepfake of himself made by a right-leaning account to mock West Bengal CM Mamata Banerjee, who threatened legal action against those satirising her likeness with AI.
Liar’s Dividend/Mislabelling: Deepfakes were also invoked as an excuse to dismiss real content. Furthermore, “cheapfakes” - manipulated media made using traditional editing tools - were found to be wrongly labelled as deepfakes.
AI-led microtargeting: AI trained on dubiously gathered voter data was used extensively by parties to micro-target voters with highly personalised synthetic content over WhatsApp or robocalls.
The report also highlights the complete lack of oversight from India’s Election Commission or Big Tech companies in addressing the widespread use of AI in the general elections.
Read the full report here.
The Menace Of AI-Generated Child Porn
Last December, researchers at the Stanford Internet Observatory (SIO) found that popular text-to-image generation models, such as Stable Diffusion 1.5, had been trained on real images of child sexual abuse.
These tools, backed by billions of dollars in funding, were being used to create AI-generated child sexual abuse material (CSAM). Stable Diffusion 1.5, created by Runway with funding from Stability AI, was hosted on Hugging Face and Civitai, and was found being used by pedophiles on the dark web to create AI-generated CSAM. Some users were found creating their own custom models using real child sex abuse content, in order to generate images of particular victims.
Stable Diffusion 1.5 was found to be the most downloaded AI image generator on Hugging Face, with over 6 million downloads. After researchers David Evan Harris and Dave Willner reached out to Hugging Face, asking why the well-known CSAM image generator was allowed to thrive on their platform, it was eventually taken down.
According to Hugging Face, it was Runway who took down Stable Diffusion 1.5. Willner and Harris note that it was still available for download on Civitai.
FRT + AR Glasses = Real-Time Doxing Nightmare
Last week I spoke about Meta’s ambitions regarding high-tech AR glasses, which may soon creep into our lives.
A pair of Harvard students recently integrated facial recognition technology into Meta’s Ray-Ban smart glasses, and were able to retrieve the identity and personal information of strangers viewed through the glasses, in real time.
📖 Read: Someone Put Facial Recognition Tech onto Meta's Smart Glasses to Instantly Dox Strangers | 404 Media
While Pimeyes - the facial recognition technology used by the creators - can be used with any camera, they specifically chose Meta’s smart glasses because the lenses are far more discreet than holding up a smartphone camera, the creators told 404 Media.
You can request Pimeyes to block any lookups of your face by clicking here.
No doubt, such technology would be incredibly dangerous in the hands of scammers and stalkers, who could target virtually any individual they encounter on the street.
Murder victim impersonated by Character AI
An anonymous user used popular AI chatbot platform Character AI to create a chatbot based on the likeness of teenage murder victim Jennifer Ann Crecente, who was shot and killed 18 years ago.
Character AI promises life-like chatbots that users can create and customise, and that converse like real people.
Google recently paid the company $2.7 billion for a one-off license to its tech, and to secure the rehiring of Character AI’s co-founders Noam Shazeer and Daniel De Freitas, who had previously left Google over disagreements about releasing its AI chatbot.
Following a tweet by Crecente’s uncle Brian Crecente - also a founder of the gaming news sites Kotaku and Polygon - the chatbot was removed by Character AI.
Have you been a victim of AI?
Have you been scammed by AI-generated videos or audio clips? Did you spot AI-generated nudes of yourself on the internet?
Decode is trying to document cases of abuse of AI, and would like to hear from you. If you are willing to share your experience, do reach out to us at [email protected]. Your privacy is important to us, and we shall preserve your anonymity.
About Decode and Deepfake Watch
Deepfake Watch is an initiative by Decode, dedicated to keeping you abreast of the latest developments in AI and its potential for misuse. Our goal is to foster an informed community capable of challenging digital deceptions and advocating for a transparent digital environment.
↗️ Was this forwarded to you? Subscribe Now
We invite you to join the conversation, share your experiences, and contribute to the collective effort to maintain the integrity of our digital landscape. Together, we can build a future where technology amplifies truth, not obscures it.
For inquiries, feedback, or contributions, reach out to us at [email protected].
🖤 Liked what you read? Give us a shoutout! 📢
↪️ Become A BOOM Member. Support Us!
↪️ Stop. Verify. Share - Use Our Tipline: 7700906588
↪️ Follow Our WhatsApp Channel
↪️ Join Our Community of TruthSeekers