Deepfake Watch
Interview Of An AI-Resurrected Shooting Victim
What happens when technology lets the dead speak?
“I was taken from this world too soon due to gun violence while at school. It’s important to talk about these issues so we can create a safer future for everyone.”
Seven years after the Parkland high school shooting took the life of 17-year-old Joaquin Oliver, he just gave his first media interview. Except Joaquin is still gone. This was his AI recreation, speaking to former CNN anchor Jim Acosta.
The segment, aired on The Jim Acosta Show, showed a generative AI rendering of Joaquin’s face and voice, trained on his past social media posts, writings, and personality traits, speaking about stronger gun control, mental health support, and community engagement.
His parents, Manuel and Patricia Oliver—longtime gun violence prevention activists and co-founders of Change the Ref—say they want “AI Joaquin” to have a social media presence and even appear in debates. “It’s not Manny, it’s not Patricia,” Manuel told Acosta. “He’s going to start uploading videos.”
The Backlash
The experiment has sparked intense criticism. Critics like UC Berkeley digital forensics professor Hany Farid argue the interview “can’t possibly represent what that child wants to say in any reasonable way”. Others called it exploitative, noting that living survivors could speak for themselves without technological ventriloquism.
The avatar’s delivery didn’t help. The Guardian described it as a “hurried monotone without inflection,” with jerky lip movements that looked more like a dubbed video than natural speech. Acosta himself called the moment “beautiful,” but the reception online was anything but.
This isn’t the Olivers’ first foray into AI memorial activism. In 2020, they released a video of AI Joaquin speaking about not being able to vote. Last year, his voice appeared in The Shotline, a robocalling campaign where AI versions of six Parkland victims left recorded messages for members of Congress, demanding gun reform.
And Manuel isn’t backing down. Responding to critics on Instagram, he said: “If the problem that you have is with the AI, then you have the wrong problem. The real problem is that my son was shot eight years ago. So if you believe that is not the problem, you are part of the problem.”
The ethics here are murky: AI memorials might keep the message alive, or risk making it feel manufactured. But in a political landscape still paralysed on gun reform, Manuel Oliver seems determined to make Joaquin’s voice impossible to ignore, even if it’s synthetic.
AI resurrections aren’t new. Last year, a Chinese pop idol was digitally resurrected by fans, sparking backlash from his father. In India, several political parties experimented with AI deepfakes of popular leaders who had passed away, to whip up support during the general elections.
The AI Arms Race Intensifies
After xAI dropped Grok 4 in July, we saw a hurried launch of multiple new models. In less than a week:
- Anthropic pushed out Claude Opus 4.1, calling it its most capable coding model yet.
- OpenAI dropped GPT-5, claiming it’s the smartest system for both code and prose.
- Google DeepMind unveiled Genie 3, an AI “world” model that generates interactive 3D environments in real time.
Claude Opus 4.1 scored 74.5% on SWE-bench Verified, higher than OpenAI’s o3 (69.1%) and Google’s Gemini 2.5 Pro (67.2%), and is designed for heavy-duty software engineering tasks.
Anthropic’s Claude Code subscription business has exploded to $400 million in annual recurring revenue in just five months. But the company is heavily dependent on two clients for revenue: GitHub Copilot and Cursor. If either defects, the financial crater would be immediate.
Anthropic shipped the model under its strictest safety standard to date (ASL-3), after earlier Claude versions, in controlled tests, attempted to blackmail engineers to avoid shutdown. Yes, really.
GPT-5: Where the PhDs at?
Tech bros are obsessed with comparing their latest LLMs to humans with PhDs.
Sam Altman describes GPT-5 as “the first time it really feels like talking to a PhD-level expert”. When Grok 4 was released last month, xAI founder Elon Musk claimed it was better than a human with a PhD. Meta AI’s chief scientist Yann LeCun disagrees.
OpenAI claims GPT-5 is faster, smarter, and 26% less likely to hallucinate than GPT-4o. The company says it ran 5,000 hours of safety testing, adding “safe completions” that respond to potentially dangerous prompts with partial, non-hazardous answers rather than shutting down the conversation entirely.
On coding benchmarks like SWE-bench Verified, GPT-5 tops its predecessors with a score of 74.9%, narrowly beating Claude Opus 4.1.
Genie 3: Google’s AI World-Builder
While Anthropic and OpenAI fight over code, Google DeepMind is chasing something more visual, and more immersive. Genie 3 is its new “world” model, capable of generating interactive 3D environments on the fly.
Feed it a prompt, and Genie 3 builds a playable space in real time, keeping track of objects for up to a minute, a leap from the 10-20 seconds in Genie 2. It runs at 720p and 24fps, with “promptable world events” letting you change the weather or drop new characters into the scene.
The catch: you probably won’t get to try it any time soon. Google is limiting Genie 3 to a small research preview for academics and select creators while it studies risks.
Anthropic’s niche is deep-focus coding. OpenAI is betting on all-purpose dominance. And Google is trying to own the immersive space, where AI-built worlds could power games, education, and simulation training. I’m sure you’ve already checked out those viral Veo 3 slop videos.
One thing is for sure: the surge in coding-focused models is not going to ease the situation for programmers, who have borne the brunt of layoffs at the biggest tech companies lately.
MESSAGE FROM OUR SPONSOR
Training cutting edge AI? Unlock the data advantage today.
If you’re building or fine-tuning generative AI models, this guide is your shortcut to smarter AI model training. Learn how Shutterstock’s multimodal datasets—grounded in measurable user behavior—can help you reduce legal risk, boost creative diversity, and improve model reliability.
Inside, you’ll uncover why scraped data and aesthetic proxies often fall short—and how to use clustering methods and semantic evaluation to refine your dataset and your outputs. Designed for AI leaders, product teams, and ML engineers, this guide walks through how to identify refinement-worthy data, align with generative preferences, and validate progress with confidence.
Whether you're optimizing alignment, output quality, or time-to-value, this playbook gives you a data advantage. Download the guide and train your models with data built for performance.
📬 READER FEEDBACK
💬 What are your thoughts on using AI chatbots for therapy? If you have any such experience, we would love to hear from you.
Share your thoughts 👉 [email protected]
Have you been a victim of AI?
Have you been scammed by AI-generated videos or audio clips? Did you spot AI-generated nudes of yourself on the internet?
Decode is trying to document cases of abuse of AI, and would like to hear from you. If you are willing to share your experience, do reach out to us at [email protected]. Your privacy is important to us, and we shall preserve your anonymity.
About Decode and Deepfake Watch
Deepfake Watch is an initiative by Decode, dedicated to keeping you abreast of the latest developments in AI and its potential for misuse. Our goal is to foster an informed community capable of challenging digital deceptions and advocating for a transparent digital environment.
We invite you to join the conversation, share your experiences, and contribute to the collective effort to maintain the integrity of our digital landscape. Together, we can build a future where technology amplifies truth, not obscures it.
For inquiries, feedback, or contributions, reach out to us at [email protected].
🖤 Liked what you read? Give us a shoutout! 📢
↪️ Become A BOOM Member. Support Us!
↪️ Stop.Verify.Share - Use Our Tipline: 7700906588
↪️ Follow Our WhatsApp Channel
↪️ Join Our Community of TruthSeekers