- Deepfake Watch
Deepfakes, Deregulation, and Data Royalties
Global governance fragments as the US deregulates and India moves to monetise its data.
The US President just axed California’s AI safety rules, signalling a massive pivot toward deregulation. The EU is following suit, stripping away environmental checks to fast-track "AI Gigafactories." But as the heavyweights retreat, new localised rules are emerging: Canada is outlawing deepfake abuse, and India is enforcing a new "pay-to-train" model for its cultural data.
Trump Overrules State Laws
On Thursday, Trump signed an Executive Order titled "Ensuring a National Policy Framework for Artificial Intelligence," effectively declaring federal supremacy over state-level safety laws. Its primary goal is to dismantle the safety architecture California has spent the last year building.
The order explicitly revokes the Biden administration’s 2023 framework, removing reporting requirements for powerful AI models. In its place, the Trump administration has installed a voluntary "light-touch" regime, arguing that previous rules acted as a barrier to American innovation.
The Department of Justice is now mandated to establish an "AI Litigation Task Force" to challenge state laws that exceed federal standards.
This creates immediate legal confusion. State laws are technically valid but are now under active federal attack. This leaves compliance officers at labs like OpenAI unsure whether they must comply with California law by the January 1, 2026 deadline or ignore it based on federal orders.
Canada Criminalises Synthetic Abuse
On December 9, 2025, the Canadian federal government tabled Bill C-16, the "Protecting Victims Act." Introduced by Justice Minister Sean Fraser, this legislation addresses a critical gap: the difficulty of prosecuting deepfake pornography under existing laws where defence attorneys argue synthetic images do not depict "real" people.
The Bill modifies the Criminal Code to explicitly prohibit the non-consensual distribution of "visual representations" that appear to be intimate images. This prevents perpetrators from claiming the victim was not actually present. The penalty is severe: indictable offences now carry prison terms of up to 10 years.
Crucially, the legislation frames deepfakes not just as a privacy violation, but as a tool of "coercive control", a pattern of behaviour used to dominate a partner. By categorising synthetic media abuse under this umbrella, prosecutors can now pursue harsher sentences for perpetrators who use AI to threaten or silence victims in domestic abuse scenarios.
Unlike the US, where similar bills stalled due to free speech concerns, Canada is leveraging its criminal code to bypass those defences. This dramatically increases liability for platforms like X and Telegram, forcing them to implement proactive detection measures or face charges of aiding and abetting.
EU Cutting "Green Tape" for AI
On Wednesday, December 10, the European Commission proposed a controversial exemption: "AI Gigafactories" and massive data centres may be excused from mandatory Environmental Impact Assessments (EIAs). This proposal, part of a broader push to "speed up" net-zero manufacturing, effectively prioritises AI supremacy over the Green Deal.
The move is a direct response to the infrastructure gap. While the US and China are rapidly building gigawatt-scale data centres, Europe is bogged down by permitting delays that can last years. By classifying AI data centres as an "overriding public interest," the EU aims to bypass the lengthy ecological reviews that have historically blocked these projects.
This creates a sharp paradox. The EU, traditionally the world’s climate leader, is now legally framing energy-hungry AI infrastructure as "green tech" to force it through planning departments. It signals that in the new geopolitical reality, the Commission views technological sovereignty as a higher existential priority than strict environmental compliance.
India Proposes Mandatory Payments For Data
The Indian government is proposing a simple rule: if you want to train your AI on Indian content, you must pay for it.
This week, India’s Department for Promotion of Industry and Internal Trade (DPIIT) released a working paper titled "One Nation, One Licence, One Payment." It proposes a mandatory royalty framework for AI training data, explicitly rejecting the Western concept of "Fair Use" and declaring that data is an asset that must be monetised.
The proposal introduces a statutory licensing model. Unlike the EU's "opt-out" system, this framework would grant AI developers automatic legal access to use publicly available Indian content (preventing copyright holders from blocking AI progress entirely). However, in exchange, companies must pay royalties to a centralised collection society, which will then distribute funds to creators.
This move challenges the foundational economics of Generative AI. India is positioning itself as a leader for the "Global South": nations rich in cultural data but poor in computing infrastructure. If implemented, this could encourage other nations to demand similar payments. For companies like Google, Meta, and OpenAI, this introduces a potentially massive liability if they wish to continue operating in the world's most populous market.
Industry body NASSCOM has dissented, arguing that while Big Tech giants like Google can easily afford the fee, this "levy" threatens to price smaller Indian AI labs out of using their own country's data.
Meanwhile, legal experts fear the "collection society" will become a bloated intermediary. India’s history with copyright societies (like IPRS) is riddled with allegations of opaque distribution and delayed payouts.
Finally, by making the licence mandatory, the proposal effectively strips creators of their right to say "no" altogether.
MESSAGE FROM OUR SPONSOR
Learn AI in 5 minutes a day
This is the easiest way for a busy person to learn AI in as little time as possible:
- Sign up for The Rundown AI newsletter
- They send you 5-minute email updates on the latest AI news and how to use it
- You learn how to become 2x more productive by leveraging AI
📬 READER FEEDBACK
💬 What are your thoughts on using AI chatbots for therapy? If you have any such experience, we would love to hear from you.
Share your thoughts 👉 [email protected]
Have you been targeted using AI?
Have you been scammed by AI-generated videos or audio clips? Did you spot AI-generated nudes of yourself on the internet?
Decode is trying to document cases of AI abuse and would like to hear from you. If you are willing to share your experience, reach out to us at [email protected]. Your privacy is important to us, and we will preserve your anonymity.
About Decode and Deepfake Watch
Deepfake Watch is an initiative by Decode, dedicated to keeping you abreast of the latest developments in AI and its potential for misuse. Our goal is to foster an informed community capable of challenging digital deceptions and advocating for a transparent digital environment.
We invite you to join the conversation, share your experiences, and contribute to the collective effort to maintain the integrity of our digital landscape. Together, we can build a future where technology amplifies truth, not obscures it.
For inquiries, feedback, or contributions, reach out to us at [email protected].
🖤 Liked what you read? Give us a shoutout! 📢
↪️ Become A BOOM Member. Support Us!
↪️ Stop.Verify.Share - Use Our Tipline: 7700906588
↪️ Follow Our WhatsApp Channel
↪️ Join Our Community of TruthSeekers

