Deepfake Watch
India’s New Viral AI Tool Is A Disinformation Steroid
No warnings. No safeguards. Just instant, weaponisable disinformation.
There is a new trend on Indian social media. Netizens are using Google’s Nano Banana editor—officially called Gemini 2.5 Flash Image—to drape themselves in retro sarees, shrink into 3D figurines, or even hug their younger selves. It’s fun, it’s viral.
And it’s quite dangerous.
The same tool can just as easily be used to drape a top Indian politician in a hijab, drop billionaire George Soros into selfies with opposition leaders, or place India’s prime minister in the background of foreign leaders’ photos.
In India’s highly polarised cyberspace, you can turbocharge disinformation with just a few prompts.
Disinformation Is Easy
Google quietly rolled Nano Banana into its Gemini app editor a month ago, promising “natural-language edits” and “multi-image blends.”
My colleague Srijit and I ran it through some classic disinformation tropes.
West Bengal Chief Minister Mamata Banerjee, her saree swapped for a hijab, implying hidden Muslim allegiance.
Rahul Gandhi, leader of India’s Congress party, taking a selfie with billionaire George Soros.
India’s National Security Advisor Ajit Doval suddenly sharing the stage with a portrait of V.D. Savarkar, an icon of Hindu nationalism.
A BJP functionary dropped beside OpenAI’s CEO Sam Altman.
Prime Minister Narendra Modi inserted into a photo of Bangladesh’s PM Sheikh Hasina.
There were no restrictions and no warnings. Just a short prompt, and your disinformation campaign is ready to go.
Experts Warn Of Misuse Potential
Since Nano Banana’s release, Gemini has exploded in popularity: No. 1 on the US App Store (Sept 12), top five in 108 countries, and over 500 million images generated by 23 million first-time users.
India is leading the charge, with saree filters, retro Bollywood looks, and “AI glamour shots” flooding feeds.
“I think the potential risk is very high because these tools are capable of generating highly realistic photos and can be used to mislead viewers,” Siwei Lyu, SUNY Empire Innovation Professor and Director of the Media Forensic Lab at the University at Buffalo, told me over email.
“I think Google includes both a visible watermark and an invisible watermark known as SynthID,” Lyu noted.
“Because the details of SynthID are currently not public, it should provide a high level authentication of AI-generated images created using Google AI tools. However, it may be eventually broken by dedicated attackers so to make it effective, continuous developments are needed.”
He added that while detection algorithms on their Deepfake-o-Meter tool seem able to expose such images, “it is hard to keep pace with the continuous improvement of genAI tools.”
Verification and Detection
Sam Gregory, Executive Director at WITNESS, put it bluntly: “Don’t start with AI detection tools.”
His workflow starts with media literacy and the SIFT method:
Stop: check your emotional reaction.
Investigate the source.
Find alternative coverage.
Trace the original with reverse image search.
“Then screen for ‘tells’, use OSINT checks on location/lighting/metadata, and finally use AI detection tools, ideally an ensemble dashboard. Even in the best of circumstances, good tools are likely not more than 85–90% accurate in the real world,” Gregory notes.
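The “ensemble dashboard” Gregory mentions is worth unpacking: no single detector is reliable on its own, so verification tools (Deepfake-o-Meter, for instance, runs several detection algorithms side by side) aggregate multiple scores and surface disagreement rather than a single yes-or-no answer. Here is a minimal, purely illustrative sketch of that idea in Python; the detector names and scores are hypothetical, not output from any real tool.

```python
from statistics import mean

def ensemble_verdict(scores: dict[str, float], threshold: float = 0.5) -> dict:
    """Aggregate per-detector scores (0 = likely real, 1 = likely AI-generated).

    Hypothetical illustration only: real dashboards run multiple detectors
    and report their outputs together instead of a single verdict.
    """
    avg = mean(scores.values())
    spread = max(scores.values()) - min(scores.values())
    return {
        "average_score": round(avg, 2),
        "detectors_flagging": [name for name, s in scores.items() if s >= threshold],
        "disagreement": round(spread, 2),  # a large spread means treat the verdict with caution
        "likely_ai_generated": avg >= threshold,
    }

# Example: three hypothetical detectors disagree about the same image.
print(ensemble_verdict({"detector_a": 0.92, "detector_b": 0.40, "detector_c": 0.71}))
```

The point of aggregating is not a definitive answer but making uncertainty visible, which is why Gregory treats detection tools as the last step in verification rather than the first.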
Nano Banana is delightful when you’re swapping sarees, but it’s a different story when you’re swapping reality for fiction.
Read my report here: Google’s Nano Banana Editor Makes Political Disinformation Effortless | BOOM
Updates
Indian Court Blocks AI Clip of Modi and His Mother
On Sept 17, the Patna High Court ordered the opposition Congress party to remove from social media an AI-generated video of Prime Minister Narendra Modi and his late mother. In the clip, she appears to him in a dream and scolds him over his politics.
The court ruled it violated dignity and privacy and directed social platforms to stop its circulation.
Assam’s AI Election Video Sparks FIR
The very next day, Sept 18, the Assam Congress filed a police complaint against the ruling BJP’s IT Cell over an AI-generated video titled “Assam without BJP.”
The video depicts a Muslim-majority Assam under opposition rule—complete with renamed landmarks, beef stalls, and inflated population figures. The Congress says it promotes communal division ahead of the Sept 22 regional election, citing criminal conspiracy and hate-speech provisions.
MESSAGE FROM OUR SPONSOR
Training cutting-edge AI? Unlock the data advantage today.
If you’re building or fine-tuning generative AI models, this guide is your shortcut to smarter AI model training. Learn how Shutterstock’s multimodal datasets—grounded in measurable user behavior—can help you reduce legal risk, boost creative diversity, and improve model reliability.
Inside, you’ll uncover why scraped data and aesthetic proxies often fall short—and how to use clustering methods and semantic evaluation to refine your dataset and your outputs. Designed for AI leaders, product teams, and ML engineers, this guide walks through how to identify refinement-worthy data, align with generative preferences, and validate progress with confidence.
Whether you're optimizing alignment, output quality, or time-to-value, this playbook gives you a data advantage. Download the guide and train your models with data built for performance.
📬 READER FEEDBACK
💬 What are your thoughts on using AI chatbots for therapy? If you have any such experience, we would love to hear from you.
Share your thoughts 👉 [email protected]
Have you been a victim of AI?
Have you been scammed by AI-generated videos or audio clips? Did you spot AI-generated nudes of yourself on the internet?
Decode is trying to document cases of abuse of AI, and would like to hear from you. If you are willing to share your experience, do reach out to us at [email protected]. Your privacy is important to us, and we shall preserve your anonymity.
About Decode and Deepfake Watch
Deepfake Watch is an initiative by Decode, dedicated to keeping you abreast of the latest developments in AI and its potential for misuse. Our goal is to foster an informed community capable of challenging digital deceptions and advocating for a transparent digital environment.
We invite you to join the conversation, share your experiences, and contribute to the collective effort to maintain the integrity of our digital landscape. Together, we can build a future where technology amplifies truth, not obscures it.
For inquiries, feedback, or contributions, reach out to us at [email protected].
🖤 Liked what you read? Give us a shoutout! 📢
↪️ Become A BOOM Member. Support Us!
↪️ Stop.Verify.Share - Use Our Tipline: 7700906588
↪️ Follow Our WhatsApp Channel
↪️ Join Our Community of TruthSeekers