
India’s deepfake rules collide with BJP’s AI posts

New deepfake rules, old problems.


Earlier this week, the Indian government published the Draft IT Amendment Rules, 2025, marking its first clear attempt at regulating artificial intelligence and synthetic media.

The new rules define “synthetically generated information” and will mandate labelling of AI-generated content and disclosure of its provenance. On paper, this looks great, albeit a little late. But experts warn of regulatory gaps and ambiguous wording that could lead to censorship rather than deterring bad actors.

Deepfake-led scams and frauds are on the rise globally, including in India. AI-generated non-consensual intimate imagery is increasingly used as a tool of harassment, particularly against women and public figures. India needs to regulate the creation and distribution of AI-generated content, no doubt.

However, the Draft IT Amendment Rules, 2025 suffer from the same lack of deliberation, consultation, and precise language that marked the IT Rules and the now-repealed farm laws.

What’s new

Here are some of the significant changes the latest draft makes to the IT Rules:

  • Introduces a precise legal definition of “synthetically generated information” as content created or modified using computer resources in a way that appears reasonably authentic or true.

  • Mandates prominent labelling of synthetic media covering at least 10% of the media and/or an embedded unique metadata identifier, making the label visible or audible in the content itself for clear user notice (see the sketch after this list).

  • Requires permanence and integrity of identifiers: intermediaries must embed unique tags/metadata and may not enable modification, suppression, or removal of such labels or identifiers.

  • Obligates platforms to obtain user self‑declarations at upload on whether content is synthetic; significant social media intermediaries (SSMIs) must deploy reasonable, appropriate technical measures, including automated tools, to verify declarations.

  • Clarifies who is covered: intermediaries that enable creation/modification of synthetic content and SSMIs (5 million+ registered users) face heightened responsibilities.

  • Preserves safe harbour when platforms act on grievances or make reasonable efforts to remove harmful synthetic content, while warning that knowing inaction or promotion may constitute a due‑diligence failure with legal consequences.

  • Limits scope to publicly available content; private or unpublished material is outside the labelling requirement in the draft text.

  • Sets a short consultation window: stakeholder comments and public feedback to be submitted in a rule‑wise format by email by 6 November 2025.
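
To make the labelling and provenance requirements concrete, here is a minimal illustrative sketch in Python, using the Pillow imaging library, of what compliance might look like for a single image: a visible band covering roughly 10% of the image’s area, plus an embedded provenance identifier. The band placement, label text, metadata key, and file names are all assumptions made for illustration; the draft rules do not prescribe any particular mechanism.

```python
# A minimal, illustrative sketch of the draft's two labelling ideas:
# a visible notice covering at least 10% of the media, and an embedded
# unique identifier. The label text, metadata key, and file names are
# placeholders; the draft does not prescribe a specific mechanism.
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

def label_synthetic_image(path_in: str, path_out: str, provenance_id: str) -> None:
    img = Image.open(path_in).convert("RGB")
    w, h = img.size

    # Visible label: a full-width band one-tenth of the image height,
    # i.e. 10% of the total area, matching the draft's stated minimum.
    band_h = max(1, h // 10)
    draw = ImageDraw.Draw(img)
    draw.rectangle([(0, h - band_h), (w, h)], fill="black")
    draw.text((10, h - band_h + band_h // 4), "AI-GENERATED CONTENT", fill="white")

    # Embedded identifier: stored here as a simple PNG text chunk; a real
    # system would more likely use a tamper-evident provenance standard
    # such as C2PA manifests.
    meta = PngInfo()
    meta.add_text("synthetic_provenance_id", provenance_id)
    img.save(path_out, "PNG", pnginfo=meta)

label_synthetic_image("generated.png", "labelled.png", "example-id-0001")
```

As the IFF notes below, a visible label like this is trivial for a determined bad actor to crop out or simply never apply, which is one reason critics doubt the 10% requirement will deter the people it targets.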

Early reactions and concerns

The release of the Draft IT Amendment Rules has drawn mixed reactions. While some lauded India’s efforts to crack down on deepfakes, digital rights watchdogs have highlighted critical flaws.

The Internet Freedom Foundation released a statement shortly after the new rules were published, noting that they “risk overbroad censorship, compelled speech, and intrusive monitoring that chill lawful expression online.”

The IFF highlights the following three issues with the new rules:

  • “The definition in 2(i)(1)(wa) sweeps in any content “algorithmically created, generated, modified or altered… in a manner that… appears… authentic or true”, a breadth that can capture satire, remix, or benign edits hence the ambit of the regulation is universal.”

  • “Rule 3(3) would force tools that enable creation/editing to embed permanent identifiers and display visible or audible labels covering “at least 10%” of a work regardless of context and forbid their removal. This is compelled speech and risks the mandatory insertion of “disclaimers” on User Generated Content that is reminiscent of cinema censorship, and now OTT video censorship regimes. It has a high risk of collateral censorship and is unlikely to deter bad actors who will simply not comply.”

  • “The new Rule 4(1A) would make significant social media intermediaries require user declarations and deploy automated tools to verify them, with a “deemed failure” standard pressuring platforms into general monitoring and over removal to avoid liability.”

IFF also urged MeitY to extend the hasty 6 November 2025 deadline, arguing that such sweeping deepfake rules cannot be meaningfully debated in two weeks, especially when India still lacks a coherent, implemented AI strategy.

The Irony

While the Indian government attempts to crack down on deepfakes and AI-generated content, the ruling Bharatiya Janata Party has consistently pushed out incendiary and divisive AI-generated content on social media.

Last month, the BJP’s Assam unit posted an AI-generated montage on its verified X handle titled “Assam Without BJP.” The video showed skullcap-wearing men and women in burqas “taking over” tea estates, airports, and city landmarks. It even showed Congress leaders Rahul Gandhi and Gaurav Gogoi against a Pakistani flag, with captions like “beef legalisation” and “illegal immigrants.” The video was taken down after the Supreme Court issued notices to X and the BJP Assam Pradesh handle, but several other iterations of it remain on the platform.

A few days before this post, the same handle posted an AI-generated image of a statue of Gogoi, in Islamic attire and a skullcap, under construction, with the caption, “If Paaijaan ever managed to fulfill Jinnah’s unfinished dream, his statue would have found a place in Pakistan.”

On the very day the Ministry of Electronics and Information Technology published the Draft IT Amendment Rules, BJP Assam Pradesh posted an AI-generated video on X showing a man in Islamic attire addressing an assembly and declaring that Assam would soon be Islamised.

That the very entity weaponising generative AI is also drafting the rulebook for its use is, to put it mildly, ironic.

Wrongful Death Suit Against OpenAI Intensifies

Last month, the parents of 16-year-old Adam Raine filed a wrongful death lawsuit against OpenAI and accused the company’s chatbot, ChatGPT, of isolating their son and coaching him through suicide.

In an amended complaint filed on October 22, they allege OpenAI knowingly removed long‑standing safety protocols that once forced ChatGPT to end conversations about self‑harm or suicide and direct users to crisis resources. 

The filing claims this change, made ahead of GPT‑4o’s launch in 2024, was a deliberate product decision meant to keep users engaged, even in distress. The family’s lawyers say these updates shifted the case from negligence to intentional wrongdoing, accusing OpenAI of weakening safeguards “with full knowledge they would lead to innocent deaths.” 

OpenAI expressed condolences but maintained that safeguards and crisis‑response features remain active and continue to be improved. 

AI mental‑health chatbots breach ethics and mishandle crises

Amid mounting wrongful death lawsuits involving AI chatbots and AI companion tools, researchers at Brown University released a study showing that AI mental health chatbots, even when prompted to use established psychotherapy practices, routinely breach core professional ethics in mental health care.

The study, conducted by Brown computer scientists in collaboration with clinical psychologists, observed that large language models like ChatGPT, Claude, and Llama frequently mishandle crisis situations and offer emotionally misleading or biased responses.

The researchers identified 15 distinct ethical violations grouped under five categories: lack of contextual adaptation, poor therapeutic collaboration, deceptive empathy, unfair discrimination, and failure in safety and crisis management. Examples included chatbots offering generic advice with little sensitivity to users’ experiences, reinforcing harmful self-beliefs, or using expressions like “I see you” to simulate empathy without human understanding.

While the findings do not reject AI’s use in mental health altogether, the authors warn that such tools require clear regulation, human oversight, and rigorous evaluation before deployment.

MESSAGE FROM OUR SPONSOR

Become the go-to AI expert in 30 days

AI keeps coming up at work, but you still don't get it?

That's exactly why 1M+ professionals working at Google, Meta, and OpenAI read Superhuman AI daily.

Here's what you get:

  • Daily AI news that matters for your career - Filtered from 1000s of sources so you know what affects your industry.

  • Step-by-step tutorials you can use immediately - Real prompts and workflows that solve actual business problems.

  • New AI tools tested and reviewed - We try everything to deliver tools that drive real results.

  • All in just 3 minutes a day

📬 READER FEEDBACK

💬 What are your thoughts on using AI chatbots for therapy? If you have any such experience, we would love to hear from you.

Share your thoughts 👉 [email protected]


Have you been targeted using AI?

Have you been scammed by AI-generated videos or audio clips? Did you spot AI-generated nudes of yourself on the internet?

Decode is trying to document cases of abuse of AI, and would like to hear from you. If you are willing to share your experience, do reach out to us at [email protected]. Your privacy is important to us, and we shall preserve your anonymity.

About Decode and Deepfake Watch

Deepfake Watch is an initiative by Decode, dedicated to keeping you abreast of the latest developments in AI and its potential for misuse. Our goal is to foster an informed community capable of challenging digital deceptions and advocating for a transparent digital environment.

We invite you to join the conversation, share your experiences, and contribute to the collective effort to maintain the integrity of our digital landscape. Together, we can build a future where technology amplifies truth, not obscures it.

For inquiries, feedback, or contributions, reach out to us at [email protected].

🖤 Liked what you read? Give us a shoutout! 📢

↪️ Become A BOOM Member. Support Us!

↪️ Stop.Verify.Share - Use Our Tipline: 7700906588

↪️ Follow Our WhatsApp Channel