AI Surveillance Turns US Classrooms Into Crime Scenes
Even homework assignments can get you arrested.
In late September 2025, a 13-year-old in the US was arrested by the police for asking ChatGPT: “How to kill my friend in the middle of class.”
Deputies at the Volusia County Sheriff's Office in DeLand, Florida, were alerted to the concerning query by Gaggle, an AI-powered student monitoring platform. When confronted, the teenager said he was just “trolling” a friend.
Two years ago, an eighth-grader in Tennessee spent a night in jail after Gaggle falsely flagged an offensive joke as a real threat. The platform also partially deleted an art student’s portfolio after erroneously flagging a photo of girls wearing tank tops as child pornography. In another case, a student wrote they were “gonna die” after running a fitness test in Crocs, and Gaggle alerted school administrators.
Schools across the United States are increasingly turning to automated surveillance technology to monitor students. However, these opaque systems are dismantling student privacy while failing to distinguish a harmless joke from a genuine threat.
False alarms and nonissues
In 2023, the Lawrence school district in Kansas introduced Gaggle in its public schools to “proactively identify students who are at risk for potential unsafe behaviors, provide support where needed, and foster a safer school environment.”
The district said the system is meant to detect “self-harm, depression, thoughts of suicide, substance abuse, cyberbullying, credible threats of violence against others, or other harmful situations.”
More than 1,200 incidents were flagged in Lawrence over ten months. However, when the Associated Press analysed the data obtained through a public records request, it found that nearly two-thirds of them were deemed “nonissues”, including over 200 false alarms triggered by student homework assignments.
In Polk County, Florida, nearly 500 Gaggle alerts over the last four years have led to 72 involuntary hospitalisations under a state law that allows authorities to conduct psychiatric evaluations of individuals deemed a risk to themselves or others.
Gaggle was founded in 1999 by Jeff Patterson as a secure student email provider and has since evolved into a full-blown AI-powered surveillance platform reportedly monitoring approximately 6 million students across 1,500 school districts in the US.
According to US education news portal The 74, Gaggle combines AI with human moderators to scan students' emails, chat messages, documents, and online activity for keywords and images indicating self-harm, violence, substance abuse, or sexual content—flagging potential threats that are then reviewed by Gaggle's team before being escalated to school administrators or, in some cases, law enforcement.
While schools are resorting to Gaggle as a safety measure, concerns around students’ privacy and safety have mounted.
When the AP and The Seattle Times requested Gaggle alert data from Vancouver Public Schools in Washington state, the district handed over almost 3,500 sensitive student records—completely unredacted.
The documents revealed that students were pouring their hearts out on school-provided laptops, writing highly personal things—all of it monitored through Gaggle. The system allowed any school staff in the Vancouver district, or anyone else with links to the files, to view everything, including student names.
According to the joint report by the AP and The Seattle Times, Gaggle’s surveillance system has in some cases outed LGBTQ+ children—while, the report found, doing little to actually keep children safe.
In Lawrence, where Gaggle continues to have unhindered access to student laptops, nine current and former students have filed a federal civil rights complaint against the public school district, alleging violations of student privacy.
Other Updates
In the US, AI Romance Is Mainstream
A survey of 1,012 U.S. adults found that 28% have had an "intimate or romantic relationship" with an AI chatbot. ChatGPT leads the pack, followed by Character.ai and voice assistants like Alexa and Siri. 53% of those with intimate AI relationships are also in successful human relationships.
Perplexity's AI Browser Is Now Free, And It Can Leak Your Data
Perplexity made its Comet AI browser free on October 2, and millions signed up for the tool that promised to manage emails, shop online, and navigate the web autonomously. Then security researchers discovered "CometJacking", a vulnerability that lets attackers hijack the AI assistant through malicious URLs and exfiltrate Gmail, Google Calendar, and other connected data.
AMD's $100 Billion Bet Against Nvidia
AMD and OpenAI announced a multi-year chip partnership on October 6, with OpenAI committing to purchase 6 gigawatts of AMD's MI450 chips beginning in late 2026—a deal analysts project could be worth over $100 billion. AMD also granted OpenAI warrants for up to 160 million shares, potentially giving OpenAI a 10% stake if fully exercised. AMD's stock surged 25% on announcement day, injecting fresh competition into an AI chip market dominated by Nvidia.
MESSAGE FROM OUR SPONSOR
The AI Insights Every Decision Maker Needs
You control budgets, manage pipelines, and make decisions, but you still have trouble keeping up with everything going on in AI. If that sounds like you, don’t worry, you’re not alone – and The Deep View is here to help.
This free, 5-minute-long daily newsletter covers everything you need to know about AI. The biggest developments, the most pressing issues, and how companies from Google and Meta to the hottest startups are using it to reshape their businesses… it’s all broken down for you each and every morning into easy-to-digest snippets.
If you want to up your AI knowledge and stay on the forefront of the industry, you can subscribe to The Deep View right here (it’s free!).
📬 READER FEEDBACK
💬 What are your thoughts on using AI chatbots for therapy? If you have any such experience, we would love to hear from you.
Share your thoughts 👉 [email protected]
Have you been targeted using AI?
Have you been scammed by AI-generated videos or audio clips? Did you spot AI-generated nudes of yourself on the internet?
Decode is trying to document cases of abuse of AI, and would like to hear from you. If you are willing to share your experience, do reach out to us at [email protected]. Your privacy is important to us, and we shall preserve your anonymity.
About Decode and Deepfake Watch
Deepfake Watch is an initiative by Decode, dedicated to keeping you abreast of the latest developments in AI and its potential for misuse. Our goal is to foster an informed community capable of challenging digital deceptions and advocating for a transparent digital environment.
We invite you to join the conversation, share your experiences, and contribute to the collective effort to maintain the integrity of our digital landscape. Together, we can build a future where technology amplifies truth rather than obscuring it.
For inquiries, feedback, or contributions, reach out to us at [email protected].
🖤 Liked what you read? Give us a shoutout! 📢
↪️ Become A BOOM Member. Support Us!
↪️ Stop.Verify.Share - Use Our Tipline: 7700906588
↪️ Follow Our WhatsApp Channel
↪️ Join Our Community of TruthSeekers