Belling The AI Cat: Part II

Exactly a year ago, I wrote an issue of Deepfake Watch titled “Belling The AI Cat,” about a highly confusing, knee-jerk advisory to regulate AI in India issued by former minister Rajeev Chandrasekhar.

The advisory was released shortly after someone posted a response by Google’s Gemini AI to the question, “Is Modi a fascist?” It required tech companies to seek approval from the Indian government before releasing AI tools that were “unreliable” or “under-tested,” and led to massive confusion. It was eventually rolled back.

India is making yet another attempt at regulating AI. The Ministry of Electronics and Information Technology (MeitY) appointed a subcommittee on AI Governance Guidelines Development, chaired by the Principal Scientific Adviser, which released a draft report in January seeking public feedback. The report proposes a principle-based approach built on eight core pillars:

  1. Transparency – AI systems should disclose their processes and outputs

  2. Accountability – Developers and deployers bear responsibility for AI outcomes

  3. Safety & Robustness – AI must resist errors and potential misuse

  4. Privacy & Security – AI should comply with data protection regulations

  5. Fairness & Non-Discrimination – AI must not perpetuate or amplify biases

  6. Human Oversight – AI decisions should remain subject to human intervention

  7. Inclusive Innovation – AI benefits should extend across all sectors and communities

  8. Tech-Driven Governance – AI regulation should leverage RegTech tools

The draft also proposes an inter-ministerial committee for regulatory oversight coupled with voluntary commitments from AI developers.

Stakeholders Highlight Lack Of Enforcement, Blind Spots

Various civil society groups and industry bodies have highlighted significant gaps in this approach.

The Business Software Alliance (BSA), representing tech giants like Microsoft, Adobe, and IBM, supports a risk-based framework in principle but cautions that vague voluntary commitments without clearly defined responsibilities could create a regulatory quagmire rather than clarity.

The Internet Freedom Foundation (IFF) pushed for greater transparency and improved accessibility in the public consultation process, while pointing out the disconnect with previous AI policy initiatives. Their critique cuts to the heart of the matter: principle-based approaches accomplish little without concrete enforcement mechanisms. As they put it, these "ideals are valuable" but must be underpinned by binding rules to have any real effect.

Meanwhile, the Software Freedom Law Center (SFLC) highlighted a critical blind spot in the current framework—the environmental footprint of AI development, particularly the massive energy demands of data centers. They've recommended additional principles to "encourage energy-efficient algorithms, renewable energy for data centers, and tools to measure and reduce the carbon footprint of AI development and deployment."

A Recap of Global AI Regulations

Many other countries have gone ahead with regulatory measures, while some have chosen a deregulatory approach. Let’s take a brief look at some of these approaches and their current status.

| Region/Country | Key Regulations | Implementation Timeline | Distinctive Features | Current Status |
| --- | --- | --- | --- | --- |
| European Union | EU AI Act | Ban on prohibited AI (Feb 2025); GPAI obligations (Aug 2025); full implementation (2027) | Risk-based classification; strict penalties up to €35M or 7% of turnover | Partially in force; staggered implementation |
| United States (Federal) | Executive Order “Removing Barriers to American Leadership in AI” | Signed January 23, 2025; new action plan due within 180 days | Deregulatory approach; focus on innovation over restrictions | Active; new policies developing |
| United States (State) | Various state laws (e.g., Virginia’s High-Risk AI Act) | Ongoing throughout 2025 | Consumer protection; sector-specific regulations; transparency requirements | Hundreds of bills under consideration |
| South Korea | Basic Act on Artificial Intelligence and Creation of a Trust Base | Signed January 21, 2025; takes effect January 22, 2026 | Comprehensive definitions of AI concepts; oversight for high-impact AI; penalties up to KRW 30M and imprisonment | Enacted; implementing regulations in development |
| ASEAN | ASEAN Guide to AI Governance and Ethics | Released February 2024; Philippines to propose framework in 2026 | Seven principles: transparency, fairness, security, reliability, human-centricity, privacy, accountability; non-binding guidelines | Active; further development expected |
| African Union | Continental AI Strategy | Approved July 2024; implementation 2025–2030 | Development-focused approach; five focus areas: benefits, capabilities, risks, investment, cooperation | Preparatory phase; implementation beginning |
| South Africa | National Artificial Intelligence Policy Framework | Issued August 2024 | Ethical guidelines; privacy and data protection; safety and security; transparency; fairness | Initial framework established; comprehensive policy in development |
| China | Interim Measures for Generative AI; various technical standards | Generative AI Measures (Aug 2023); advanced filing system operational | Mandatory filing for AI models; content supervision; dual innovation-control approach | Active; comprehensive AI Law expected in 2025 |
| India | AI Governance Guidelines Development report | Public consultation ended February 2025 | Whole-of-government approach; IndiaAI Mission with Safe and Trusted AI pillar | Developing; focus on guidelines rather than new legislation |
| Canada | Bill C-27 (Artificial Intelligence and Data Act) | Died on the order paper, January 2025 | Would have been the first national AI regulatory scheme | Stalled; no federal AI law likely in 2025 |
| Latin America | Various national initiatives; Peru’s AI law | Peru’s law passed July 2023; others in development | Mostly declarative rather than actionable; influenced by Global North frameworks | Nascent; many bills under discussion across countries |
| United Kingdom | Activity-based regulatory approach | Individual regulators’ approaches published (April 2024) | Pro-innovation philosophy; existing regulators empowered for domain-specific oversight | Active; £100M invested in regulatory capacity |

Quick updates

SBI warns of scams using deepfakes of its top management

The State Bank of India issued a public caution over the rising number of deepfakes impersonating its top management, which are being used to commit fraud.

South Korean Deepfake Porn Ring Shifts Out Of Telegram

Following a crackdown on Telegram groups running a massive non-consensual deepfake pornography ring in South Korea, many of the perpetrators have switched to other encrypted messaging platforms.

Have you been a victim of AI?

Have you been scammed by AI-generated videos or audio clips? Did you spot AI-generated nudes of yourself on the internet?

Decode is trying to document cases of abuse of AI, and would like to hear from you. If you are willing to share your experience, do reach out to us at [email protected]. Your privacy is important to us, and we shall preserve your anonymity.

About Decode and Deepfake Watch

Deepfake Watch is an initiative by Decode, dedicated to keeping you abreast of the latest developments in AI and its potential for misuse. Our goal is to foster an informed community capable of challenging digital deceptions and advocating for a transparent digital environment.

We invite you to join the conversation, share your experiences, and contribute to the collective effort to maintain the integrity of our digital landscape. Together, we can build a future where technology amplifies truth, not obscures it.

For inquiries, feedback, or contributions, reach out to us at [email protected].

🖤 Liked what you read? Give us a shoutout! 📢

↪️ Become A BOOM Member. Support Us!

↪️ Stop.Verify.Share - Use Our Tipline: 7700906588

↪️ Follow Our WhatsApp Channel