Can’t Tell What’s Real Anymore
AI-generated videos are fueling mass panic in India, while Europe prepares to weaken its own AI regulations.
This week, a blast occurred in Delhi, and AI slop immediately crept into the news cycle. Meanwhile, the UK introduced laws to stop AI child abuse material, US regulators confronted AI therapy bots, and Europe prepared to dismantle its own AI regulations.
AI slop is no longer just lazily made content designed to game algorithms and drive engagement. The hunger for shock value is spilling into real-world consequences, turning manufactured outrage into genuine panic.
Hours after a car exploded near Delhi's Red Fort on Monday evening, November 10, killing at least 13 people, an AI-generated video went viral. It showed a synthetic man narrating the chaos, complete with sirens and smoke in the background.
The account behind it had been churning out AI-generated visuals and seized on the panic to draw attention. We could not determine whether the face in the video belonged to a real person or was itself fabricated.
This pattern is spreading. Last month, a 22-year-old journalism student in the Indian city of Lucknow used ChatGPT to add a leopard to a selfie and sent it to friends. Within hours, local police and forest officials geo-located the image and showed up at his door after frantic calls from residents. Between late September and October, forest officials battled multiple leopard complaints across Lucknow neighbourhoods. Every one of the six or seven viral images turned out to be an AI-generated hoax.
UK Moves To Stop AI Child Abuse At Source
The UK government introduced new legislation on 12 November to tackle AI-generated child sexual abuse material at the source, as reports more than doubled in the past year. Data from the Internet Watch Foundation shows reports of AI-generated child sexual abuse material surged from 199 in 2024 to 426 in 2025. Images depicting infants aged 0-2 years jumped from 5 to 92 in the same period.
Under the new law, the Technology Secretary and Home Secretary will have powers to designate AI developers and child protection organisations like the Internet Watch Foundation as authorised testers. These bodies will be empowered to scrutinise AI models and ensure safeguards are in place to prevent them from generating or proliferating child sexual abuse material, including indecent images and videos of children.
Currently, criminal liability for possessing this material means developers cannot carry out safety testing on AI models. Images can only be removed after they have been created and shared online. This measure, one of the first of its kind globally, ensures AI systems' safeguards can be tested from the start to limit production in the first place.
The severity of the material has also intensified. Category A content, involving penetrative sexual activity, sexual activity with an animal, or sadism, rose from 2,621 to 3,086 items, now accounting for 56% of all illegal material compared to 41% last year. Girls have been overwhelmingly targeted, making up 94% of illegal AI images in 2025.
Kerry Smith, Chief Executive of the Internet Watch Foundation, told The Guardian, "AI tools have made it so survivors can be victimised all over again with just a few clicks, giving criminals the ability to make potentially limitless amounts of sophisticated, photorealistic child sexual abuse material."
US Regulators Grapple With AI Therapy Bots
On November 11, the United States’ Food and Drug Administration's advisory committee held its first public meeting on generative AI-enabled digital mental health devices, examining whether chatbots designed to offer therapeutic alternatives should be regulated as medical devices.
The same week, the American Psychological Association released a health advisory on November 12 warning that AI chatbots and wellness apps "lack scientific evidence and regulatory safeguards needed to ensure users' safety," are "prone to creating a false sense of empathy," and "give inappropriate guidance during crisis situations."
The APA called current regulatory frameworks "inadequate" and urged immediate action to prevent AI tools from "posing as licensed professionals." While over 1,200 medical devices using AI have been authorised by the FDA, none have been authorised for mental health uses.
EU Prepares To Weaken AI Act And GDPR
Leaked draft documents reported by Tech Policy Press, Netzpolitik, and the Financial Times revealed that the European Commission is preparing the "Digital Omnibus," a sweeping regulatory reform package set to be unveiled on 19 November that would significantly roll back key provisions of the EU AI Act and GDPR.
Proposed changes include allowing AI companies to self-declare high-risk systems as low-risk without notifying authorities, using "legitimate interest" rather than explicit consent to train AI on personal data, a one-year grace period before fines for high-risk AI violations, and postponing transparency penalties until August 2027.
Max Schrems, founder of digital rights NGO noyb, told Tech Policy Press, "It is very concerning to see Trump-ian lawmaking practices taking hold in Brussels." Over 120 civil society groups called the package the "biggest rollback of digital rights in EU history," warning changes "mainly benefit Big Tech."
MESSAGE FROM OUR SPONSOR
Find your customers on Roku this Black Friday
As with any digital ad campaign, the important thing is to reach streaming audiences who will convert. To that end, Roku’s self-service Ads Manager stands ready with powerful segmentation and targeting options. After all, you know your customers, and we know our streaming audience.
Worried it’s too late to spin up new Black Friday creative? With Roku Ads Manager, you can easily import and augment existing creative assets from your social channels. We also have AI-assisted upscaling, so every ad is primed for CTV.
Once you’ve done this, then you can easily set up A/B tests to flight different creative variants and Black Friday offers. If you’re a Shopify brand, you can even run shoppable ads directly on-screen so viewers can purchase with just a click of their Roku remote.
Bonus: we’re gifting you $5K in ad credits when you spend your first $5K on Roku Ads Manager. Just sign up and use code GET5K. Terms apply.
📬 READER FEEDBACK
💬 What are your thoughts on using AI chatbots for therapy? If you have any such experience, we would love to hear from you.
Share your thoughts 👉 [email protected]
Have you been targeted using AI?
Have you been scammed by AI-generated videos or audio clips? Did you spot AI-generated nudes of yourself on the internet?
Decode is trying to document cases of AI abuse and would like to hear from you. If you are willing to share your experience, do reach out to us at [email protected]. Your privacy is important to us, and we shall preserve your anonymity.
About Decode and Deepfake Watch
Deepfake Watch is an initiative by Decode, dedicated to keeping you abreast of the latest developments in AI and its potential for misuse. Our goal is to foster an informed community capable of challenging digital deceptions and advocating for a transparent digital environment.
We invite you to join the conversation, share your experiences, and contribute to the collective effort to maintain the integrity of our digital landscape. Together, we can build a future where technology amplifies truth, not obscures it.
For inquiries, feedback, or contributions, reach out to us at [email protected].
🖤 Liked what you read? Give us a shoutout! 📢
↪️ Become A BOOM Member. Support Us!
↪️ Stop.Verify.Share - Use Our Tipline: 7700906588
↪️ Follow Our WhatsApp Channel
↪️ Join Our Community of TruthSeekers