A Nuclear Rivalry Enters The Deepfake Era

The India–Pakistan rivalry is being reshaped by AI-driven disinformation.


Hours after an Indian Air Force pilot died when his Tejas fighter jet crashed at the Dubai Air Show last month, a video surfaced on X showing Indian Air Chief Marshal AP Singh chastising the government for inducting indigenous jets. 

The clip wasn't real; it was a deepfake, and one of many that fact-checkers at BOOM have debunked since India's military conflict with Pakistan in May.


We gathered these posts, most of which originated on X, and uncovered a coordinated disinformation campaign that has targeted India since the conflict. This marks a new frontier of conflict between the two nuclear-armed neighbours.

A Relentless Operation

A cluster of X accounts, with metadata revealing Pakistan as their origin, has been pumping out a mix of AI-generated videos, fake quotes, bogus letters, and fabricated news articles at a relentless pace. 

BOOM has published more than 30 fact-checks flagging synthetic media since the India-Pakistan conflict earlier this year. The campaign bears the hallmarks of a troll farm: synchronised activity, rapid amplification by seemingly unrelated accounts, and fake personas designed to mimic Indian users.
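One of the hallmarks described above, synchronised activity, is something researchers commonly screen for by checking whether accounts repeatedly post within seconds of one another. A minimal sketch of that idea in Python, using entirely hypothetical account names and timestamps (this is an illustration of the general technique, not BOOM's actual methodology):

```python
from datetime import datetime
from itertools import combinations

# Hypothetical post timestamps per account. In practice these would be
# collected from platform data; the handles and times here are invented.
posts = {
    "account_a": ["2025-05-10 09:00:05", "2025-05-10 12:30:10", "2025-05-11 08:15:00"],
    "account_b": ["2025-05-10 09:00:12", "2025-05-10 12:30:21", "2025-05-11 08:15:09"],
    "account_c": ["2025-05-10 17:45:00"],
}

WINDOW_SECONDS = 30   # how close two posts must be to count as "synchronised"
MIN_MATCHES = 2       # require repeated co-posting, not a one-off coincidence

def parse(ts):
    return datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")

def synchronised_pairs(posts):
    """Flag account pairs whose posts repeatedly land within the window."""
    flagged = []
    for a, b in combinations(posts, 2):
        matches = sum(
            1
            for ta in posts[a]
            for tb in posts[b]
            if abs((parse(ta) - parse(tb)).total_seconds()) <= WINDOW_SECONDS
        )
        if matches >= MIN_MATCHES:
            flagged.append((a, b, matches))
    return flagged

print(synchronised_pairs(posts))
# → [('account_a', 'account_b', 3)]
```

Here account_a and account_b post within seconds of each other three times, so the pair is flagged, while account_c, posting independently, is not. Real investigations layer many such signals (shared media, creation dates, metadata) before drawing conclusions.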

Key accounts driving the operation include @InsiderWB, @Baba_Thoka, @Hawkss_eye, and @abubakarqassam—names that appear repeatedly across Indian fact-checkers' debunks. Their disinformation is amplified by a network posting pro-Pakistan Army and PML-N rhetoric. 

Some accounts, like @akrittisharma, @TaraSharma02, and @Kussikhuelafn, attempt to pass themselves off as Indian users, with profiles decked out with tricolour flags and Indian political slogans. But X's own metadata exposes their Pakistani origins. Central accounts periodically purge their posts, wiping the trail and making detection harder. Many have since been withheld in India or suspended after being reported.

The narratives are strategic: fake admissions of tactical missteps, fabricated casualty figures from the conflict, bogus statements claiming the government is communalising the armed forces, attacks on the military's secular credentials, and conspiracy theories framing the Red Fort blast in Delhi as a "false flag." The operation also exploits internal fault lines, such as religious polarisation, caste prejudice, and tensions in Manipur and Ladakh.

Top defence officials, including the Indian Army Chief, Navy Chief, Air Force Chief, and Chief of Defence Staff, have been consistent targets.

"Whenever they made any statement, we would always know…this is going to be manipulated," Pamposh Raina, head of the Deepfakes Analysis Unit at the Trusted Information Alliance, told BOOM. The reason? Post-Operation Sindoor, military leaders have become more visible in the media.

"Their visibility is higher…as a result, manipulation becomes easier," Raina explained. She also noted that some AI voice clones used Urdu words, and that insignia and name tags on military uniforms showed telltale signs of AI-led distortion.

Ever since Elon Musk took over Twitter (and changed the name to X), the platform has become a fertile ground for disinformation campaigns and influence operations. By paywalling both TweetDeck and its API, the platform has significantly impeded researchers and journalists seeking to track and collect evidence on influence operations.

The AI Rat Race In Chaos


Sam Altman has reportedly declared “code red” at OpenAI. 

Not because of mounting allegations of AI psychosis from prolonged use of ChatGPT, but because its biggest rival appears to be leaving it behind. 

Google dropped Gemini 3 last month, and it promptly topped multiple independent leaderboards, scoring higher on benchmarks than ChatGPT's top models. According to a post on X by former Google employee Deedy Das, OpenAI has lost around 6% of its traffic since the launch of Gemini 3.

Human casualties are collateral damage, but loss of market share is unacceptable.

According to reports, Altman has now paused a new ChatGPT ad feature, along with work on AI agents, to focus on improving ChatGPT itself. The vibe has shifted, as we see the AI leader scrambling to defend the top spot.


Google does have a ton of personal data, and it has been shoving AI down its users' throats in the form of a hyper-personalised user experience. The company is trying to embed Gemini across Search, Android, Chrome, and Workspace, while its parent company Alphabet is betting big on AI chips.

Anthropic is betting on its "safety-first" approach to reassure those spooked by rogue AI behaviour. Claude Opus 4.5 carries Anthropic's highest safety classification (ASL-3), and the company continues to publish unusually detailed transparency documentation about its models' capabilities and risks.

Meanwhile, Mistral AI launched Mistral Large 3 in January with 675B total parameters, a 256k context window, and native multimodal capabilities, all released under Apache 2.0. By late 2025, it had been integrated into Microsoft Azure, Amazon Bedrock, and NVIDIA's stack, making it one of the most widely deployed open-weight frontier models globally, and Europe’s biggest contender in the AI race.

And then there's DeepSeek. Its newly released DeepSeek-V3.2 is a 671B-parameter open-weight model reportedly trained for $6 million, against the $100+ million typically spent by its US-based counterparts. A recent study found that China now leads the US in open-model download share: 17.1% to 15.8%, driven largely by DeepSeek and Alibaba's Qwen.

MESSAGE FROM OUR SPONSOR

Learn AI in 5 minutes a day

This is the easiest way for a busy person wanting to learn AI in as little time as possible:

  1. Sign up for The Rundown AI newsletter

  2. They send you 5-minute email updates on the latest AI news and how to use it

  3. You learn how to become 2x more productive by leveraging AI

📬 READER FEEDBACK

💬 What are your thoughts on using AI chatbots for therapy? If you have any such experience, we would love to hear from you.

Share your thoughts 👉 [email protected]


Have you been targeted using AI?

Have you been scammed by AI-generated videos or audio clips? Did you spot AI-generated nudes of yourself on the internet?

Decode is trying to document cases of abuse of AI, and would like to hear from you. If you are willing to share your experience, do reach out to us at [email protected]. Your privacy is important to us, and we shall preserve your anonymity.

About Decode and Deepfake Watch

Deepfake Watch is an initiative by Decode, dedicated to keeping you abreast of the latest developments in AI and its potential for misuse. Our goal is to foster an informed community capable of challenging digital deceptions and advocating for a transparent digital environment.

We invite you to join the conversation, share your experiences, and contribute to the collective effort to maintain the integrity of our digital landscape. Together, we can build a future where technology amplifies truth, not obscures it.

For inquiries, feedback, or contributions, reach out to us at [email protected].

🖤 Liked what you read? Give us a shoutout! 📢

↪️ Become A BOOM Member. Support Us!

↪️ Stop.Verify.Share - Use Our Tipline: 7700906588

↪️ Follow Our WhatsApp Channel