AI Slop Can Speak Now
And it's brilliant, annoying and scary.
Last week, the internet was flooded with highly realistic videos of… AI slop. All thanks to Google’s Veo 3, a newly released video-generation model available through Gemini.
Unlike OpenAI’s Sora, Veo 3 can add dialogue and sound, much to the dismay of creative professionals. If you are worried about hyperrealistic Veo 3 videos spreading disinformation, read on to learn how to use Google’s SynthID Detector to spot them.
NYT Closes AI Licensing Deal With Amazon
After a series of lawsuits by media companies around the world, AI giants are now teaming up with these media outlets for training data agreements.
The New York Times, which recorded US$4.4 million in pretax litigation costs in its first quarter related to its 2023 copyright lawsuit against OpenAI and Microsoft, has now signed an AI licensing deal with Amazon covering its editorial content.
Last year, OpenAI also announced a slew of deals—even as it faced the NYT lawsuit—with News Corp, Axel Springer, The Atlantic, Vox Media, Condé Nast, Dotdash Meredith, Financial Times, Le Monde, Prisa Media, Future plc, and The Associated Press.
Similarly, Reuters licensed its editorial content to Meta AI.
These deals signal a potential new revenue stream for news publishers amid declining ad revenue, while establishing AI chatbots as significant distribution channels.
However, most of these deals are not entirely transparent: the specifics are seldom made public. Nor is it certain that these deals will end the ongoing feud between news publishers and AI companies around the world, as many of the lawsuits (including NYT vs OpenAI/Microsoft) remain unsettled.
Can SynthID assuage Veo 3 woes?
Google DeepMind CEO Demis Hassabis shared a video of onions being sautéed in a pan. The video wasn’t real.

Soon the internet was flooded with more such fake-but-realistic Veo 3 creations, from a fake comedy show to a fake automobile expo to fake professors giving fake lectures.
There was this video of a man taking us on a tour of ancient Rome, which showed men and women idling around, giggling and farting, while the Roman army marched alongside an array of crucified bodies. So avant-garde.
How Google trained this model is not publicly known; DeepMind has declined to disclose its training data. But creators around the world are right to worry about their videos being scraped from the internet without their knowledge: Google has told TechCrunch that it may have used some YouTube videos to train the Veo models.
Some think this is just more hype, while many others are distressed and see it as too disruptive, too dangerous. It may be too early to tell, though the potential for abuse is glaring.
Thankfully, Google simultaneously announced its SynthID Detector, which helps detect watermarks in content generated by Google’s AI models.
Currently, access to the SynthID Detector is limited; you can join the waitlist here.
Once you have access, you simply upload the content to the detector, which scans for watermarks and returns the results.
However, researchers have long warned that watermarking is not foolproof and can be broken through methods such as compression, pixelation, cropping, and resizing.
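To see why compression can break a watermark, here is a toy sketch. SynthID’s actual scheme is not public and is designed to be far more robust than this; the example below uses a deliberately naive technique (a watermark hidden in the least significant bit of each pixel value) purely to illustrate how lossy processing destroys hidden signals.

```python
# Toy illustration only, NOT Google's SynthID (whose scheme is not public):
# a naive least-significant-bit (LSB) watermark embedded in pixel values,
# then destroyed by a crude simulation of lossy compression (quantization).

def embed_lsb(pixels, bits):
    # Overwrite the lowest bit of each pixel with one watermark bit.
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract_lsb(pixels):
    # Read the watermark back out of the lowest bits.
    return [p & 1 for p in pixels]

def quantize(pixels, step=8):
    # Stand-in for lossy compression: snap values to a coarse grid,
    # which discards exactly the low-order detail the watermark lives in.
    return [round(p / step) * step for p in pixels]

pixels = [52, 131, 7, 244, 98, 175, 20, 66]   # a tiny "image"
watermark = [1, 0, 1, 1, 0, 0, 1, 0]

marked = embed_lsb(pixels, watermark)
print(extract_lsb(marked) == watermark)        # True: intact copy keeps the mark

compressed = quantize(marked)                  # simulate re-encoding/compression
print(extract_lsb(compressed) == watermark)    # False: the mark is wiped out
```

Robust schemes embed their signal in ways that survive such transformations, but as the researchers note, sufficiently aggressive editing can still strip or degrade them.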
The Telegram And Grok Team-up: What Could Go Wrong? Gulp!
A partnership agreement is apparently brewing between Elon Musk’s xAI and Telegram to bring xAI’s chatbot Grok to the messaging app’s 1 billion-plus users.
Yes, Grok, the chatbot that recently began spouting the debunked “white genocide” conspiracy theory about white farmers in South Africa for no apparent reason, teaming up with Telegram, which was, until recently, a safe haven for drug dealers and distributors of deepfake porn and child sexual abuse material.
The details of this deal are not entirely transparent. While Telegram co-founder Pavel Durov claimed that Telegram will receive “$300M in cash and equity from xAI, plus 50% of the revenue from xAI subscriptions sold via Telegram,” Musk replied to his tweet saying, “No deal has been signed.”
South Korea Cracks Down On Election Deepfakes
South Korea’s National Election Commission has filed its first criminal complaints under the country’s new anti-deepfake law, ahead of the presidential election on June 3.
Three individuals have been charged with creating AI-generated content to influence voters, including 35 manipulated images of candidates and 10 fake videos featuring news anchors.
The revised Public Official Election Act, which took effect in December 2023, prohibits the use of deepfakes in election campaigns during the 90-day period before voting, with violators facing up to seven years in prison or fines of up to ₩50 million (US$36,500).
MESSAGE FROM OUR SPONSOR
Learn how to make AI work for you
AI won’t take your job, but a person using AI might. That’s why 1,000,000+ professionals read The Rundown AI – the free newsletter that keeps you updated on the latest AI news and teaches you how to use it in just 5 minutes a day.
Latest On AI and Deepfakes
The UK government has decided to press ahead with changes to its copyright law that would allow AI companies to train their models on artists’ creative work without penalty or remuneration.
This has sparked the ire of the British creative community, with musician Elton John calling the government “absolute losers.”
Google recently launched its own in-house detector of AI-generated content, called SynthID Detector. There is a catch: the detector only works with content generated by Google’s AI models Gemini, Imagen, Lyria, and Veo.
A recent article in MIT Technology Review highlights the rapidly growing energy demands of large language models and their contribution to the rising carbon footprints of Big Tech companies.
MESSAGE FROM OUR SPONSOR
Find out why 1M+ professionals read Superhuman AI daily.
In 2 years you will be working for AI
Or an AI will be working for you
Here's how you can future-proof yourself:
Join the Superhuman AI newsletter – read by 1M+ people at top companies
Master AI tools, tutorials, and news in just 3 minutes a day
Become 10X more productive using AI
Join 1,000,000+ pros at companies like Google, Meta, and Amazon that are using AI to get ahead.
Have you been a victim of AI?
Have you been scammed by AI-generated videos or audio clips? Did you spot AI-generated nudes of yourself on the internet?
Decode is trying to document cases of AI abuse and would like to hear from you. If you are willing to share your experience, reach out to us at [email protected]. Your privacy is important to us, and we will preserve your anonymity.
About Decode and Deepfake Watch
Deepfake Watch is an initiative by Decode, dedicated to keeping you abreast of the latest developments in AI and its potential for misuse. Our goal is to foster an informed community capable of challenging digital deceptions and advocating for a transparent digital environment.
We invite you to join the conversation, share your experiences, and contribute to the collective effort to maintain the integrity of our digital landscape. Together, we can build a future where technology amplifies truth, not obscures it.
For inquiries, feedback, or contributions, reach out to us at [email protected].
🖤 Liked what you read? Give us a shoutout! 📢
↪️ Become A BOOM Member. Support Us!
↪️ Stop.Verify.Share - Use Our Tipline: 7700906588
↪️ Follow Our WhatsApp Channel
↪️ Join Our Community of TruthSeekers