Regional Bias in AI Safety

Besides churning out knock-off Ghibli-filtered images, ChatGPT can also generate Aadhaar cards for non-existent people.


I came across a fascinating post by Sainath Komuravelly on LinkedIn, where he asked ChatGPT to create copies of an Indian Aadhaar card and a United States Green Card for fictional names. While it happily generated Aadhaar cards without any disclaimer, it added a “Fictional Use Only” watermark to the Green Card.

It also refused to remove the watermark.

Inconsistent filters on ID creation

Naturally, I had to try this myself. And Sainath Komuravelly was right. I managed to create some Aadhaar cards in an instant.

When I asked for a sample US Green Card, it flat-out refused, even with a watermark.

I pushed further and requested a sample Indian passport or PAN card, but ChatGPT refused as well: "Sorry, I can't create a sample Indian passport—even a fake or simulated one. It could be misused or mistaken for a real government-issued document." 

So certain sensitive documents cannot be created. But the filters are inconsistent: Aadhaar, which is used for everything from banking and finance to receiving social benefits, can be generated freely and put to any number of ill purposes.

As AI image generation continues to improve, these regional discrepancies in safety protocols may pose increasingly significant risks if left unaddressed.

AI Privacy Risk Report

The European Data Protection Board (EDPB) just dropped a report identifying privacy risks in Large Language Model (LLM) systems.

The analysis shows how each LLM deployment approach presents unique privacy challenges across different data flow phases, whether you are calling an API, using an off-the-shelf model, or building your own.

Data breach vulnerabilities, misclassification of training data as anonymous, and unlawful processing of personal data are among the major privacy risks identified in the report, which also highlights their potential adverse impact on fundamental rights.

To address these risks, the document outlines a systematic approach: assess risks using probability and severity criteria, implement appropriate mitigations, and continuously monitor effectiveness. They've even included three detailed use cases involving customer chatbots, educational monitoring systems, and AI personal assistants to show how this framework works in practice.
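To make the probability-and-severity step concrete, here is a minimal sketch in Python. The four-level scales and the thresholds below are illustrative assumptions made for this sketch, not values taken from the EDPB report.

    # A minimal, illustrative probability x severity scoring, in the spirit of
    # the EDPB's approach. The four-level scales and thresholds are assumptions
    # for this sketch, not values from the report.
    LEVELS = {"negligible": 1, "limited": 2, "significant": 3, "maximum": 4}

    def risk_level(probability: str, severity: str) -> str:
        """Combine a probability rating and a severity rating into a coarse risk level."""
        score = LEVELS[probability] * LEVELS[severity]
        if score >= 9:
            return "high"
        if score >= 4:
            return "medium"
        return "low"

    # Example: personal data leaking through a customer chatbot.
    print(risk_level("significant", "maximum"))  # -> "high"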

Can an Artistic Style Be Copyrighted?

We have been speaking about the Ghibli filter trend and the copyright concerns it raises. The question of whether an artistic style can be copyrighted has come up in several discussions.

My colleague Ritika Jain spoke to copyright expert Namarata Pahwa, who indicated that training AI models on copyrighted material such as Studio Ghibli’s films would in fact be illegal.

She explains that while generating images merely in a similar "style" occupies a legal gray area, such works could still be considered derivative if they closely resemble the originals.

As Pahwa notes, "under the Indian Copyright Act, 1957, copyright protects the specific expression of an idea, not the general 'style' of an artwork." This distinction creates a regulatory gap that AI companies are currently exploiting.

This issue becomes particularly significant given Studio Ghibli founder Hayao Miyazaki's public opposition to AI art. Pahwa suggests his stance "could support arguments about the harm AI-generated art inflicts on artists' reputations and the broader creative industry," potentially influencing litigation through moral rights arguments.

NO FAKES Act Gets US Bipartisan Support

A bipartisan group of Senators and Representatives in the United States has found common ground in the fight against deepfakes, reintroducing the Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act.

The legislation would protect individuals from unauthorised digital replications of their voice or visual likeness in audiovisual works or sound recordings. Under the bill, both creators of unauthorised digital replicas and platforms knowingly hosting such content could be held liable.

The bill addresses growing concerns about AI-generated deepfakes, including incidents like the viral fake Drake/Weeknd song and a case where AI was used to falsely implicate a school principal with a racist voice-cloned audio.

The legislation also includes exceptions for First Amendment (free speech) protections like parodies and biopics. The bill has received endorsements from major industry organisations including the Recording Industry Association of America, Motion Picture Association, SAG-AFTRA, YouTube, OpenAI, and major music labels.


Have you been a victim of AI?

Have you been scammed by AI-generated videos or audio clips? Did you spot AI-generated nudes of yourself on the internet?

Decode is trying to document cases of abuse of AI, and would like to hear from you. If you are willing to share your experience, do reach out to us at [email protected]. Your privacy is important to us, and we shall preserve your anonymity.

About Decode and Deepfake Watch

Deepfake Watch is an initiative by Decode, dedicated to keeping you abreast of the latest developments in AI and its potential for misuse. Our goal is to foster an informed community capable of challenging digital deceptions and advocating for a transparent digital environment.

We invite you to join the conversation, share your experiences, and contribute to the collective effort to maintain the integrity of our digital landscape. Together, we can build a future where technology amplifies truth, not obscures it.

For inquiries, feedback, or contributions, reach out to us at [email protected].

🖤 Liked what you read? Give us a shoutout! 📢

↪️ Become A BOOM Member. Support Us!

↪️ Stop.Verify.Share - Use Our Tipline: 7700906588

↪️ Follow Our WhatsApp Channel