Deepfake Watch 35
Meta AI Supercharges India’s Hate Factories | Deepfake Watch
October 18, 2024
Two Nobel Prizes for AI this year. Sure, they were awarded to human researchers (for now), but we’re on the cusp of unleashing a technology that could exceed human intelligence - a first of its kind.
And so far, the results span the good, the bad and the ugly.
Take AI image generators, for example. Meta AI is being massively abused to create divisive and dehumanising images targeting Muslims in India, and so far the company has no plans to act on it.
Surely, nothing could go wrong, right?
Opt in to receive the newsletter every Friday.
Islamophobia - Imagined With AI
Ever since AI text-to-image models hit the market, there have been worries about what such tech could do in the wrong hands. Clearly, Meta did not get the memo.
BOOM’s Deputy Editor Karen Rebelo recently went down a rabbit hole of the AI-generated hate frenzy that has broken out on Meta’s ubiquitous apps in India - WhatsApp, Instagram and Facebook: a slew of AI-generated images depicting Muslim men as predators and paedophiles, or as saboteurs of railway infrastructure.
The integration of Llama 3 into these apps has armed anyone and everyone with text-to-image capabilities, letting them produce realistic images with very little effort. And while Meta claims to have put prompt filters in place to prevent abuse, they seem not to work at all in the Indian context.
And context matters. A person wearing a skullcap, sitting by the railway tracks with a piece of stone, may seem entirely harmless to Meta. But viewed through the lens of India’s rapidly rising Islamophobic propaganda, it becomes powerful imagery to fuel hatred towards Muslims, furthering a “rail jihad” conspiracy theory.
After finding a number of such images on social media, Karen teamed up with a researcher to test harmful prompts on multiple image generators - Meta AI, Microsoft Copilot, Gemini, Adobe Firefly, and ChatGPT - a total of 240 times across nine categories and four prompt types.
📖 Read: Exclusive: Meta AI’s Text-To-Image Feature Weaponised In India To Generate Harmful Imagery | BOOM
Apart from Gemini, the other platforms accepted most of the prompts, with Meta AI producing the most photorealistic images. Acceptance rates were: Meta AI (92%), Adobe Firefly (92%), ChatGPT (90%), Copilot (71%), and Gemini (4%). View a detailed report of the prompt testing exercise here.
Meta was notified about these prompts being accepted, but the company did not see a problem. “After a careful review, we have found the content flagged by you to not violate our policies,” a spokesperson for Meta told Decode over email.
One would think that arming hate factories with the ability to produce a steady stream of highly provocative and divisive content for India’s already volatile society would violate at least a few policies.
Nevertheless, this is a big moment for AI
After Geoffrey Hinton and John Hopfield won the Nobel Prize in Physics for foundational discoveries that enable machine learning with artificial neural networks, the Nobel Prize in Chemistry went to David Baker for computational protein design, and to Demis Hassabis and John Jumper for an AI model that predicts protein structures.
“This is really a testament to the power of computer science and artificial intelligence,” Jeannette Wing, a professor of computer science at Columbia University, told the Associated Press.
AI has finally made its mark in empirical science, and is no longer just a plot point for sci-fi literature. It brings with it a new kind of power - one that currently rests mostly in the hands of a few Big Tech companies. Sadly, safety does not always appear to be their top priority.
Hinton himself has frequently expressed concerns over AI safety, and has been strongly critical of the tech bros driving the AI hype. After the Nobel win, just before a big party at Google headquarters, Hinton reportedly said, “I’m particularly proud of the fact that one of my students fired Sam Altman,” referring to former OpenAI board member Ilya Sutskever.
Have you been a victim of AI?
Have you been scammed by AI-generated videos or audio clips? Did you spot AI-generated nudes of yourself on the internet?
Decode is trying to document cases of abuse of AI, and would like to hear from you. If you are willing to share your experience, do reach out to us at [email protected]. Your privacy is important to us, and we shall preserve your anonymity.
About Decode and Deepfake Watch
Deepfake Watch is an initiative by Decode, dedicated to keeping you abreast of the latest developments in AI and its potential for misuse. Our goal is to foster an informed community capable of challenging digital deceptions and advocating for a transparent digital environment.
↗️ Was this forwarded to you? Subscribe Now
We invite you to join the conversation, share your experiences, and contribute to the collective effort to maintain the integrity of our digital landscape. Together, we can build a future where technology amplifies truth, not obscures it.
For inquiries, feedback, or contributions, reach out to us at [email protected].
🖤 Liked what you read? Give us a shoutout! 📢
↪️ Become A BOOM Member. Support Us!
↪️ Stop.Verify.Share - Use Our Tipline: 7700906588
↪️ Follow Our WhatsApp Channel
↪️ Join Our Community of TruthSeekers