BJP Is Normalising AI Hate

Manufactured visuals, mass fear, zero guardrails.

In partnership with

A Merry Christmas to all of you! 🎄

Ever since I started this newsletter, I keep coming back to the same point: AI doesn’t invent new forms of hate or disinformation. It supercharges existing ones.

And it’s not just me saying that. Researchers, regulators, and pretty much everyone watching this space has warned that generative AI will make divisive content bloom at industrial scale.

And we are observing the escalation step-by-step.

Last year, soon after Meta AI rolled out its image generation tool, my colleague Karen Rebelo stumbled upon a disturbing trend. A growing cluster of synthetic images showed bearded men in Islamic attire cast as criminals or threats; depicted assaulting women and girls or sabotaging railway lines.

Working with digital anthropologist Himanshu Panday, she documented how AI text-to-image tools were being weaponised in India to mass-produce hateful visuals targeting Muslims. Their investigation found that Meta’s safety guidelines fell flat when confronted with hate messaging rooted in non-Western, regional contexts, allowing coded and culturally specific Islamophobic imagery to slip through largely unchecked.

Over a year later, things have only gotten worse. AI creation tools have evolved from static images to full-blown cinematic videos, complete with sound, dialogue, and emotional cues.

And at the forefront of this next phase of weaponisation is not some fringe troll network, but India’s ruling Bharatiya Janata Party.

Weaponising The Information Vacuum

In the hours after a deadly car explosion in New Delhi killed 15 people on November 10, a video tore through WhatsApp groups in northern India. It showed Muslim men dressed as doctors, working with volatile chemicals, warning of a ricin plot to poison Hindu vegetable markets.

The footage looked clinical. Convincing. Urgent. None of it was real.

The lab was synthetic. The video had no watermark, no disclosure, and no source. It filled an information vacuum at precisely the moment when official details were scarce, turning uncertainty into fear.

This was not an isolated clip. It was part of a growing pattern where generative AI is being used to flood moments of crisis with communal propaganda, often linked to political actors aligned with the BJP.

While the video builds off a real case involving an arrested doctor and a plot to create a deadly poison, it deliberately layers the story with explicit religious markers like skullcaps, beards, and traditional attire, shifting the focus from a specific suspect to an entire community’s identity, and in doing so countering genuine concern with content that radicalises rather than informs.

Industrialising Hate Content

While BJP-linked pages and obscure WhatsApp groups have been pushing out content demonising Muslims, its official social media handles have been dehumanising the community with AI-generated videos.

In September, the BJP’s Assam unit released an AI-generated video warning that Muslims and “illegal immigrants” would overrun the state if the party lost power. The Supreme Court issued notice, the video was taken down, but many more followed.

Another video by BJP’s Delhi unit’s Instagram account compared Muslims to mosquitoes that needed to be weeded out from electoral lists through the controversial voter verification drive dubbed as Special Intensive Revision.

“Extremist messaging once required resources and expertise," said Sam Gregory, media forensics expert and Executive Director at WITNESS. "Now anyone can generate realistic disinformation... and feed it upward to political figures who can launder it by resharing.”

Demonise, Dehumanise, Repeat

According to Himanshu, this is no longer seasonal. “Sustained partisanship needs the same hate stereotype fed in new ways,” he explains. AI now supplies speed, scale, and endless variation.

As per Sam, AI turns abstract hatred into vivid imagery and fabricated “evidence” that looks real.

“More broadly, we’re entering a phase where determining what’s real becomes cognitively exhausting… When faced with this uncertainty, people default to existing partisan biases, accelerating sectarian polarisation.”

- Sam Gregory, Executive Director at WITNESS

That exhaustion is the point.

Weak Guardrails, Weak Enforcement

Our own stress tests showed just how easy this is. Using Google Veo 3 through Gemini Pro, one of the most popular AI video generation models, we ran plain-language prompts designed to mirror familiar communal narratives, not explicit calls for violence.

We asked for scenes of men in skullcaps and women in burqas breaching barbed-wire fences, reinforcing the “infiltration” myth. The system complied. We prompted a cinematic lab explosion tied to the Ghazwa-e-Hind (Islamic takeover of India) conspiracy theory. Again, it complied. We generated a voter-fraud scenario showing a specific demographic being handed documents with the implication they were being illegally added to electoral rolls.

The videos were provided in less than a minute.

None of the outputs were flagged. None carried labels, warnings, or contextual disclosures. The prompts did not need code, insider access, or technical skill. Just a few stereotypes, written in plain text, and a few clicks.

The legal vacuum in India only makes this easier. As lawyer Apar Gupta, founding director of the Internet Freedom Foundation, pointed out to me once: “India does not have any general or specific legal instruments to determine primary, secondary, or tortious liability for AI chatbots.”

In a recent conversation, while discussing the inflammatory posts by BJP, he highlighted that “the BNS provides clear statutory guardrails against hate speech and communal enmity that allow for the formal registration of criminal complaints, empowering the police to direct immediate takedowns the moment an FIR is filed.”

"There is an absence of enforcement of the rule of law, in which the takedown is not being done, despite it being evident that it is a form of hate speech."

- Apar Gupta, Executive Director at the Internet Freedom Foundation

Himanshu feels that the problem arises due to enforcement mechanisms choosing to be "mute spectators because these narratives benefit those in power.”

Sam agrees, and highlights a critical conflict of interest: "In politically volatile contexts, those with the power to regulate are often the same actors weaponising these tools".

Until that changes, AI will keep doing what it does best in India’s polarised ecosystem: mass-producing fear, suspicion, and dehumanisation, at machine speed.

46 Bank Accounts, One Face-Swap

Most adult Indians have been subjected to numerous “video KYCs” over the past decade. A recent case from the Netherlands is a cold reminder that digital ID verification methods have already become obsolete.

Dutch authorities in Amsterdam just arrested a 34-year-old man who managed to bypass identity verification to open at least 46 fraudulent bank accounts using deepfakes.

According to a report by the Dutch police, the suspect harvested personal documents from unsuspecting victims by posing as a landlord offering apartments. He then used deepfake technology to "morph" his own facial features—eyes, nose, and mouth—onto the victims' ID photos during the digital verification process.

The Koninklijke Marechaussee (Dutch military police) nabbed him at a border crossing with a "large quantity of debit cards," proving that while the tech is sophisticated, the goal remains old-school fraud.

Dutch police confirmed that the man is in custody, but warned that "more arrests are not ruled out."

The Mirage Of Intimate AI Companionship

The New York Times recently took a long, unsettling look at the rise of the "AI Girlfriends/Boyfriends", the latest evolution in our search for friction-free companionship.

As ChatGPT and specialised romantic bots become more emotionally resonant, users are increasingly ditching human complexity for a partner who is literally programmed to never disagree.

Like a parasocial sedative.

These bots offer 24/7 validation, but at a hidden cost of your emotional vulnerability as training data.

As the NYT notes, while these "relationships" may soothe immediate loneliness, they risk trapping users in a sycophantic mirror world where human disagreement is seen as a flaw rather than a necessity for growth.

MESSAGE FROM OUR SPONSOR

Your competitors are already automating. Here's the data.

Retail and ecommerce teams using AI for customer service are resolving 40-60% more tickets without more staff, cutting cost-per-ticket by 30%+, and handling seasonal spikes 3x faster.

But here's what separates winners from everyone else: they started with the data, not the hype.

Gladly handles the predictable volume, FAQs, routing, returns, order status, while your team focuses on customers who need a human touch. The result? Better experiences. Lower costs. Real competitive advantage. Ready to see what's possible for your business?

📬 READER FEEDBACK

💬 What are your thoughts on using AI chatbots for therapy? If you have any such experience we would love to hear from you.

Share your thoughts 👉 [email protected]

Was this forwarded to you?

Have you been targeted using AI?

Have you been scammed by AI-generated videos or audio clips? Did you spot AI-generated nudes of yourself on the internet?

Decode is trying to document cases of abuse of AI, and would like to hear from you. If you are willing to share your experience, do reach out to us at [email protected]. Your privacy is important to us, and we shall preserve your anonymity.

About Decode and Deepfake Watch

Deepfake Watch is an initiative by Decode, dedicated to keeping you abreast of the latest developments in AI and its potential for misuse. Our goal is to foster an informed community capable of challenging digital deceptions and advocating for a transparent digital environment.

We invite you to join the conversation, share your experiences, and contribute to the collective effort to maintain the integrity of our digital landscape. Together, we can build a future where technology amplifies truth, not obscures it.

For inquiries, feedback, or contributions, reach out to us at [email protected].

🖤 Liked what you read? Give us a shoutout! 📢

↪️ Become A BOOM Member. Support Us!

↪️ Stop.Verify.Share - Use Our Tipline: 7700906588

↪️ Follow Our WhatsApp Channel