OpenAI Doubles Down On Deepfake Slop Arms Race

Not another generative AI tool.

A few days ago, a video popped up on social media showing OpenAI CEO Sam Altman shoplifting a GPU from a retail store. The video was made with Sora 2, a state-of-the-art video generation model that can produce entire videos with sound and insert the likenesses of real people.

The company also announced the launch of a Sora app, a TikTok-like platform where people can generate and share video content using the model's features.

While generative AI fans are hailing the launch, experts warn that it will likely exacerbate the growing number of AI-enabled crimes and add to the abundance of AI slop.

What's Different With Sora 2

Sora 2's predecessor, Sora, generated high-quality videos from text prompts, but limitations such as temporal inconsistencies and physical impossibilities made the synthetic nature of its output apparent. Its videos were also silent.

According to OpenAI, Sora 2 can generate dialogue, sound effects, ambient noise and music, further blurring the line between real and AI-generated videos.

Google's Veo 3, which launched earlier this year, offers similar capabilities. But where Sora 2 really stands out is its 'cameo' feature, which lets users insert their own digital personas, and those of others, into videos.

The Sora App

Another distinctive feature is the new TikTok-style Sora app, currently available to iOS users by invitation. According to OpenAI, the app lets users generate, share and remix short AI videos with synchronised audio.

As part of its initial safeguards, Sora 2 blocks generations that include real people other than users who have provided cameo consent, and it does not support public figure generation at launch.

OpenAI's system card says Sora 2 was trained on diverse datasets, including public internet data, licensed data, and user-provided or researcher-generated data. The company has not, however, provided further details about those datasets.

In March 2024, former OpenAI CTO Mira Murati told the Wall Street Journal that the company "used publicly available and licensed data" to train its video model, but did not clarify whether that included videos scraped from social media.

A recent investigation by the Washington Post revealed that Sora 2 can closely replicate TikTok videos, scenes from Netflix shows like Wednesday, video games like Minecraft, and animated logos for Warner Bros., DreamWorks and other major production companies.

According to the report, "Sora's ability to re-create specific imagery and brands suggests a version of the originals appeared in the tool's training data, AI researchers said."

Another Wall Street Journal report highlights that OpenAI plans to use copyrighted material to generate videos unless copyright holders explicitly opt out of having their work used in generated content.

Reuters reported that Disney has already opted out of allowing its content to be used in Sora 2, but little is known about what ordinary users can do to protect their likenesses from being used without consent once the app is publicly accessible.

While the company claims to have guardrails in place to prevent misuse of its flashy new tool, Washington Post reporter Drew Harwell tested it and found plenty of opportunities to create misleading content.

Sharing Harwell's video on LinkedIn, Sam Gregory, Programme Director at WITNESS, wrote, "There's a chilling casualness, and sense of their own playful discovery at the expense of others, to the way OpenAI has released Sora2—enabling easy appropriation of likenesses and simulation of real-life events."

Other Updates

Australian Court Issues $344K Fine for AI Deepfake Pornography Creation

Last week, Anthony Rotondo was fined $343,500 by the Federal Court of Australia for creating and distributing 12 AI-generated pornographic deepfakes of six women without their consent between late 2022 and October 2023. This marks Australia's first major financial penalty for deepfake abuse.

Your Conversations With Meta AI Assistant Will Power Targeted Advertising

Starting 16 December 2025, your conversations with Meta's AI assistant will be used to personalise content and ads across Facebook and Instagram, the company announced on Wednesday. Users cannot opt out, though Meta says it won't use conversations about sensitive topics such as health, politics or sexual orientation for ad targeting.

Minnesota Newsroom Uses AI To Unravel Mass Shooter's Manifesto

After a Minneapolis church shooting killed two children in August, local news outlet the Star Tribune used ChatGPT to translate nearly 200 pages of the shooter's faux-Cyrillic journals in hours, a task that would have taken weeks by hand. The chatbot hallucinated critical details, but Russian-language experts caught the errors, underscoring that while AI can speed up grunt work, human verification remains indispensable in journalism.

MESSAGE FROM OUR SPONSOR

The AI Insights Every Decision Maker Needs

You control budgets, manage pipelines, and make decisions, but you still have trouble keeping up with everything going on in AI. If that sounds like you, don’t worry, you’re not alone – and The Deep View is here to help.

This free, 5-minute-long daily newsletter covers everything you need to know about AI. The biggest developments, the most pressing issues, and how companies from Google and Meta to the hottest startups are using it to reshape their businesses… it’s all broken down for you each and every morning into easy-to-digest snippets.

If you want to up your AI knowledge and stay on the forefront of the industry, you can subscribe to The Deep View right here (it’s free!). 

📬 READER FEEDBACK

💬 What are your thoughts on using AI chatbots for therapy? If you have any such experience, we would love to hear from you.

Share your thoughts 👉 [email protected]

Have you been targeted using AI?

Have you been scammed by AI-generated videos or audio clips? Did you spot AI-generated nudes of yourself on the internet?

Decode is trying to document cases of AI abuse and would like to hear from you. If you are willing to share your experience, do reach out to us at [email protected]. Your privacy is important to us, and we shall preserve your anonymity.

About Decode and Deepfake Watch

Deepfake Watch is an initiative by Decode, dedicated to keeping you abreast of the latest developments in AI and its potential for misuse. Our goal is to foster an informed community capable of challenging digital deceptions and advocating for a transparent digital environment.

We invite you to join the conversation, share your experiences, and contribute to the collective effort to maintain the integrity of our digital landscape. Together, we can build a future where technology amplifies truth rather than obscuring it.

For inquiries, feedback, or contributions, reach out to us at [email protected].

🖤 Liked what you read? Give us a shoutout! 📢

↪️ Become A BOOM Member. Support Us!

↪️ Stop.Verify.Share - Use Our Tipline: 7700906588

↪️ Follow Our WhatsApp Channel