
Convert A Tragedy To Clicks With The Latest AI Filter

Life in dystopian times.

On 22 April 2025, a group of armed extremists stormed the paradisiacal meadows of Kashmir’s Baisaran Valley and killed 28 civilians, most of them unsuspecting tourists enjoying some time off.

Soon, an image from the aftermath of the attack hit the internet, showing Himanshi—a grieving soon-to-be widow of Naval officer Lt. Vinay Narwal—sitting in shock over the wounded body of her husband.

And then, a large number of social media users somehow thought it would be a good idea to run that image through the cool and trendy Ghibli filter, the one that turns everyday mundane banality into maximum kawaii cuteness, and share the result on social media for better engagement.

This image was soon picked up by a plethora of social media accounts, including an official account of India’s ruling party, the BJP, and shared with dramatic embellishments: excessive blood, exaggerated facial expressions, and Bollywood-esque action scenes.

If you are baffled by this and think it is insensitive and preposterous, good news: you are still normal!

Grief Gets Ghiblified

Thankfully, not everyone was on board with this trend.

Responses to BJP Chhattisgarh’s post on X

BJP Chhattisgarh’s post containing a ‘ghiblified’ version of the tragic photo drew a tremendous amount of flak, with most people finding it insensitive. But what did the account admin do? They deleted the post and reposted it, this time with some hashtags.

And unfortunately, while many were not on board with this, too many others jumped on the trend.

Some people must have thought the original photo of the tragic moment was boring. So they used AI to create other, supposedly more realistic, versions: excess blood, exaggerated facial expressions, and action-packed scenes.

This is as dystopian as it gets, like an unreleased episode of Black Mirror’s latest season.

Discussing this dramatisation of tragedy, cyber psychologist Nirali Bhatia told me that “in this hyperconnected world, such behaviour shows that grief has become performance-driven.”

Bhatia feels that this stems from the emotional nature of mass trauma, where critical thinking goes for a toss.

“These AI-generated images heighten emotional cues, making us feel a stronger connection to the event without considering relevance or factuality,” she notes.

India’s AI Use Hype

India is going gaga over AI: everyone seems to be using it for everything (like putting anime filters on tragic moments). Governments, central and regional alike, want AI to decide on social benefits, to supercharge policies, and to remove poverty once and for all.

A recent article by researcher Mila T. Samdub at the University of Pennsylvania’s Center for the Advanced Study of India examines India’s positioning as the “AI use case capital of the world,” and argues that this “use cases” approach ignores the realities of India’s political economy.

According to Samdub, like the digitisation of India’s governance before it, the precipitous push for AI serves powerful interests: legitimising the BJP-led government’s technocratic image, providing pathways for tech giants like Microsoft and Amazon to legitimise their activities in India, and allowing development funders like the Gates Foundation to pursue new digital interventions. The common folks, however, gain little, and are left with Ghibli filters and AI-generated variations of good morning messages.

Samdub argues that AI startups in India are keener on pleasing enterprise clients than on tackling development challenges, with 70% of generative AI startups providing solutions exclusively for businesses.

What would the alternatives be? According to the article, common folks should be treated as AI owners and producers, rather than mere end-users and data sources.


Have you been a victim of AI?

Have you been scammed by AI-generated videos or audio clips? Did you spot AI-generated nudes of yourself on the internet?

Decode is trying to document cases of AI abuse, and would like to hear from you. If you are willing to share your experience, do reach out to us at [email protected]. Your privacy is important to us, and we will preserve your anonymity.

About Decode and Deepfake Watch

Deepfake Watch is an initiative by Decode, dedicated to keeping you abreast of the latest developments in AI and its potential for misuse. Our goal is to foster an informed community capable of challenging digital deceptions and advocating for a transparent digital environment.

We invite you to join the conversation, share your experiences, and contribute to the collective effort to maintain the integrity of our digital landscape. Together, we can build a future where technology amplifies truth, not obscures it.

For inquiries, feedback, or contributions, reach out to us at [email protected].

🖤 Liked what you read? Give us a shoutout! 📢

↪️ Become A BOOM Member. Support Us!

↪️ Stop.Verify.Share - Use Our Tipline: 7700906588

↪️ Follow Our WhatsApp Channel