The Great Undressing

Think your photos are safe? Grok doesn't.

Nudification tools have spent years lurking in the shadows of the internet. They were the "dirty secret" of niche Telegram channels and sketchy subreddits, and their creators hid their identities and dodged regulators and law enforcement.

Not anymore. Elon Musk’s Grok AI has officially dragged them into the town square.

Now anyone willing to pay $30 for a Grok subscription can indulge in some of their worst impulses.

Musk, in a display of his trademark maturity and leadership, responded to the mounting reports of sexual harassment by sharing a picture of himself in a bikini, seemingly making light of a tool currently being used to violate the dignity of thousands of women and minors.

But this wasn't an unforeseen accident. The Midas Project, an AI oversight organisation, revealed this week that they warned xAI back in August 2025 that Grok was "essentially a nudification tool waiting to be exploited."

Musk ignored them.

The Horror in Plain Sight

The public availability of this tool has brought out the absolute worst in humanity. Since the "edit image" button launched in late December 2025, letting users alter existing photos, the tool has been used relentlessly. At peak usage, an analysis by Reuters recorded 102 attempts to undress women in a single 10-minute window.

Children were not spared. 

Reports indicate that users have been using Grok to generate sexualised and nude images and videos of minors. The "spicy" filters, it seems, have no moral floor or legal limits. 

Neither were the dead. 

Eliot Higgins, the founder of Bellingcat, posted on Bluesky about how users have been using Grok to "undress" the dead body of Rene Nicole Good, a victim of an ICE shooting in Minneapolis. 

This level of digital necrophilia and desecration was previously unimaginable on a mainstream social platform.

"Put Her In A Bikini"

My colleague Hera Rizwan at Decode recently went inside this nightmare to see how easy it is to weaponise these prompts. She followed the story of "Nisha" (name changed), an Indian woman who posted a critique of the nudification trend, only to become its next victim.

A perpetrator took Nisha’s only uploaded photo, her display picture, and used the blunt prompt: “Put her in a bikini.” 

Within seconds, Nisha was "almost stripped" on screen, her digital likeness violated and shared in a reply thread for everyone to see. When she reported the image to X, the platform claimed it didn't violate their rules. When she reached out to the perpetrator directly, she was met with a chilling arrogance: “Wait till you find out it’s not that easy... even if they find me, what will they do.”

Hera’s investigation found that the perpetrator's timeline was a "catalogue of misogynistic and vulgar posts," documenting at least five other instances where he used the "Put her in a bikini" prompt on different women. 

Accountability: 404 Not Found

As the world burns, Musk’s response has been characteristically dismissive. Instead of rolling back the tool or implementing hard blocks on generating images of real people, he gave this vague answer on X:

"Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content."

Translation: "It’s legal until it’s not." 

Global Regulatory Firestorm

The rest of the world isn't laughing at the bikini memes. In the past week, regulators across the globe have moved against X on multiple fronts.

United Kingdom: On January 5, Ofcom formally contacted X regarding Grok’s compliance with the Online Safety Act. This is the first time these powers have been used against generative AI, and it could lead to fines of up to 10% of global revenue. 

France: The Paris public prosecutor has expanded an existing investigation into X to include child pornography charges, opening the door to criminal liability for xAI executives. 

Australia: The eSafety Commissioner has launched an investigation into Grok following a doubling of reports of image-based abuse since late December 2025. Commissioner Julie Inman Grant has made it clear that the platform’s 'Edit Image' tool is being weaponised for harassment, stating that existing safety codes apply "whether it’s AI-generated or not." 

United States: While federal regulation lags in the United States, Texas has entered the chat. The Texas Responsible AI Governance Act (TRAIGA) became enforceable on January 1, making it the first US state law to specifically criminalise AI-generated non-consensual intimate images. The law has real teeth for this "nudification" crisis because it squarely targets intent and developer liability.

India and Malaysia Join the Fight

India's Ministry of Electronics and IT (MeitY) and Malaysia's Communications Ministry opened parallel investigations on January 6.

In India, the situation is escalating quickly. MeitY has informed X that its response regarding Grok’s obscene content is "not adequate." The government is currently weighing further action as the 72-hour window for a "proper" explanation closes.

What is X actually saying?

While Musk's official stance remains ambiguous, Grok itself issued a public acknowledgment of "failures in safeguards" on X.

Meanwhile, xAI has been telling regulators that it is "urgently fixing" vulnerabilities, even as the tool continues to undress women and minors.

It is the classic Big Tech dance: ship a dangerous product, ignore the safety warnings, wait for the harm to happen, and then offer "thoughts and prayers" while the subscriptions keep rolling in.

THINK TWICE about the pictures you post publicly. Remove them if you can.

MESSAGE FROM OUR SPONSOR

Learn AI in 5 minutes a day

This is the easiest way for a busy person to learn AI in as little time as possible:

  1. Sign up for The Rundown AI newsletter

  2. They send you 5-minute email updates on the latest AI news and how to use it

  3. You learn how to become 2x more productive by leveraging AI

📬 READER FEEDBACK

💬 What are your thoughts on using AI chatbots for therapy? If you have any such experience, we would love to hear from you.

Share your thoughts 👉 [email protected]

Have you been targeted using AI?

Have you been scammed by AI-generated videos or audio clips? Have you spotted AI-generated nudes of yourself on the internet?

Decode is trying to document cases of abuse of AI, and would like to hear from you. If you are willing to share your experience, do reach out to us at [email protected]. Your privacy is important to us, and we shall preserve your anonymity.

About Decode and Deepfake Watch

Deepfake Watch is an initiative by Decode, dedicated to keeping you abreast of the latest developments in AI and its potential for misuse. Our goal is to foster an informed community capable of challenging digital deceptions and advocating for a transparent digital environment.

We invite you to join the conversation, share your experiences, and contribute to the collective effort to maintain the integrity of our digital landscape. Together, we can build a future where technology amplifies truth, not obscures it.

For inquiries, feedback, or contributions, reach out to us at [email protected].

🖤 Liked what you read? Give us a shoutout! 📢

↪️ Become A BOOM Member. Support Us!

↪️ Stop.Verify.Share - Use Our Tipline: 7700906588

↪️ Follow Our WhatsApp Channel