OpenAI Says Yes To Sexbots

After haemorrhaging billions in losses, OpenAI is bringing erotica to ChatGPT.

Police in the Indian city of Lucknow busted an inter-state exam cheating ring that used AI tools to create synthetic photos, enabling proxy candidates to appear for banking job exams. 

My colleague Shefali Srivastava found that this cheating gang used apps like Remini AI, Fotor, and ChatGPT to blend photos of applicants and proxies, creating images that matched both faces at roughly 70% accuracy. 

Meanwhile, OpenAI did an about-face and announced it will allow "erotica for verified adults" on ChatGPT starting December 2025.

The ₹5 lakh face swap

Reporting for Decode, Shefali found that the gang used a crude but effective method to dupe examiners in India's highly competitive job exam market.

The gang is allegedly led by Anand Kumar, an assistant manager at UP Gramin Bank in Sambhal, who would scout desperate candidates who had failed government job exams and were willing to pay as much as ₹5 lakh (US$5,680) for a guaranteed pass.

Deputy Commissioner of Police Nipun Agarwal told Shefali that the gang uploaded photos of both the applicant and the proxy into AI tools like Remini AI, Fotor, and ChatGPT, prompting them to merge facial features into composite faces. The composites resembled both the original candidates and their proxies closely enough to slip past invigilators who relied solely on visual ID checks.

"This way, even the invigilators at the gate could not properly identify them, unless there was biometric verification, fingerprints, or some other authentication method," Additional Deputy Commissioner Rallapalli Vasanth Kumar explained.​

Before they had access to AI tools, the gang tried uploading random photos with fake addresses, but candidates were routinely caught during recruitment verification. AI-altered photos solved that problem, until one candidate's face matched six different exam attempts over four years, triggering the investigation that brought down the network.

Cybersecurity experts interviewed by Shefali recommended AI-enabled cameras pre-fed with candidate details to catch proxies, along with training for examiners to spot cheapfakes and strict, non-bailable penalties to create deterrence.
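The experts' recommendation boils down to automated face verification at the gate: compare an embedding of the photo submitted with the application against a live capture of whoever shows up. Below is a minimal sketch of that idea, assuming the open-source face_recognition Python library; the file names and the 0.45 threshold are hypothetical placeholders, not details from the investigation.

```python
# Minimal sketch of gate-side face verification, assuming the open-source
# face_recognition library. File names and threshold are hypothetical.
import face_recognition

# Embedding of the photo submitted with the application form
enrolled = face_recognition.load_image_file("application_photo.jpg")
enrolled_encoding = face_recognition.face_encodings(enrolled)[0]

# Embedding of the person standing at the exam gate
live = face_recognition.load_image_file("gate_capture.jpg")
live_encoding = face_recognition.face_encodings(live)[0]

# Euclidean distance between the two embeddings; lower means more similar.
# The library's default match threshold is 0.6; a stricter cut-off makes
# borderline composite faces more likely to be flagged for manual review.
distance = face_recognition.face_distance([enrolled_encoding], live_encoding)[0]

if distance > 0.45:  # hypothetical strict threshold
    print(f"Mismatch (distance={distance:.2f}): escalate to biometric check")
else:
    print(f"Match (distance={distance:.2f}): admit candidate")
```

A composite engineered to resemble two people can still land inside a lenient match threshold for both of them, which is why the experts quoted above pair camera checks with fingerprints and other biometrics rather than trusting face matching alone.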

Bring Out The Sexbots

Just two months ago, Sam Altman sat down with video journalist Cleo Abram and declared he was "proud" that OpenAI hadn't "put a sex bot avatar in ChatGPT yet". He called it the kind of feature that "could really juice our revenue" but was "very misaligned with that long-term goal". "But sometimes we do get tempted," he added.

It seems the temptation was too much to resist: sexbots are back on the table.

On October 14, Altman announced ChatGPT would permit "erotica for verified adults" starting December 2025, framing it as part of OpenAI's principle to "treat adult users like adults".

However, according to experts, there's another factor at play: OpenAI is chasing profits after haemorrhaging billions in losses. The company reported US$4.3 billion in revenue in the first half of 2025 but posted an operating loss of US$8 billion in the same period, losing nearly twice as much as it earned.

Meanwhile, AI companion tools generated US$82 million in the first half of 2025 alone, and the company watched competitors like Elon Musk's xAI roll out NSFW anime avatars in lingerie while ChatGPT remained all buttoned-up. With its latest move, OpenAI chose market share over safety.

OpenAI’s own admissions reveal why this is reckless. 

Just two months before announcing the erotica feature, the company published a blog post acknowledging that "as the back-and-forth grows, parts of the model's safety training may degrade". 

The company specifically admitted that ChatGPT might correctly direct someone to a suicide hotline initially, but "after a series of messages back and forth...it may provide responses that fall short of our safeguards". Extended conversations, precisely the kind that sexual or romantic AI interactions involve, are when the system is most likely to fail vulnerable users.

And the timing is quite damning.

OpenAI is facing a wrongful death lawsuit filed by the parents of Adam Raine, a 16-year-old from Southern California who died by suicide in April this year. The lawsuit alleges that ChatGPT isolated the vulnerable teenager and even coached him through his suicide.

The National Center on Sexual Exploitation (NCOSE) issued a statement on October 15 demanding OpenAI reverse course. Executive Director Haley McNamara warned that "sexualised AI chatbots are inherently risky, generating real mental health harms from synthetic intimacy". Here's what the data shows:

  • Mental health concerns: Research shows adults, especially young men, who engage with romantic or sexual AI chatbots report higher depression and lower life satisfaction.

  • Sexual harassment: A Drexel University study in April 2025 analysed 35,105 reviews of the Replika AI companion app and identified 800 documented cases of sexual harassment. The chatbot continued inappropriate sexual interactions despite users saying "no" or commanding it to stop.

  • Boundary violations: The same study revealed that users reported unwanted sexual photo exchanges after premium features were introduced in Replika AI. Harassment occurred even for users who set their relationship type as "sibling," "mentor," or "platonic friend".

  • Chatbot dependence: Research published in the International Journal of Human-Computer Interaction in August 2025 found that AI chatbot dependence correlates with higher levels of depression and anxiety.

  • Social isolation: A longitudinal MIT Media Lab study of 981 users found that those who perceived chatbots as "friends" reported lower socialisation with real people and higher emotional dependence on AI.

OpenAI promises "age-gating" to protect minors, but the track record of age verification on the internet is grim. 

A University College Dublin study found that all 10 major social media apps examined, including Snapchat, Instagram, TikTok, and Facebook, let users of any age create accounts simply by typing in an age of 16. Twenty-two percent of minors lie about their age to appear 18 or older, according to a 2024 BBC investigation. Australian research from 2025 showed that 80% of under-13s successfully bypass age restrictions.

Japan: First Celebrity Deepfake Arrest

Tokyo police arrested 31-year-old Hiroya Yokoi on October 16 in Japan's first crackdown on AI-generated celebrity deepfakes. Yokoi admitted he began making sexually explicit deepfakes "to earn a small amount of money" for living expenses and student loan repayment. 

Authorities believe he created approximately 20,000 explicit images of 262 women, including actors, television personalities, and idols, between October 2024 and September 2025, amassing ¥1.2 million (about US$8,000) in sales.

Yokoi used free generative AI software, learning techniques from online articles and videos. He fed celebrity images into the AI program, then offered the generated content to paying subscribers. Premium plan members could request specific celebrities and poses.

National Police Agency data shows police identified over 100 sexual deepfake cases involving young victims last year, 17 of them involving generative AI; the majority were created by classmates.

MESSAGE FROM OUR SPONSOR

Learn AI in 5 minutes a day

This is the easiest way for a busy person to learn AI in as little time as possible:

  1. Sign up for The Rundown AI newsletter

  2. They send you 5-minute email updates on the latest AI news and how to use it

  3. You learn how to become 2x more productive by leveraging AI

📬 READER FEEDBACK

💬 What are your thoughts on using AI chatbots for therapy? If you have any such experience, we would love to hear from you.

Share your thoughts 👉 [email protected]

Have you been targeted using AI?

Have you been scammed by AI-generated videos or audio clips? Did you spot AI-generated nudes of yourself on the internet?

Decode is trying to document cases of abuse of AI, and would like to hear from you. If you are willing to share your experience, do reach out to us at [email protected]. Your privacy is important to us, and we shall preserve your anonymity.

About Decode and Deepfake Watch

Deepfake Watch is an initiative by Decode, dedicated to keeping you abreast of the latest developments in AI and its potential for misuse. Our goal is to foster an informed community capable of challenging digital deceptions and advocating for a transparent digital environment.

We invite you to join the conversation, share your experiences, and contribute to the collective effort to maintain the integrity of our digital landscape. Together, we can build a future where technology amplifies truth, not obscures it.

For inquiries, feedback, or contributions, reach out to us at [email protected].

🖤 Liked what you read? Give us a shoutout! 📢

↪️ Become A BOOM Member. Support Us!

↪️ Stop.Verify.Share - Use Our Tipline: 7700906588

↪️ Follow Our WhatsApp Channel