Are You A DeepSeeker?
If you've been reading the news over the past week, you might be asking yourself: what is this new hype in the AI industry called DeepSeek? I'm here to tell you that this is not hype at all - but rather an "unhype" event.
Now, what do I mean by that?
R1 Flew Over The Cuckoo’s Nest
Tech bros from Silicon Valley and beyond have been telling us that building cutting-edge AI requires a lot of money, a ton of expensive GPUs, and enough electricity to power a small nation in Eastern Europe.
Then Chinese startup DeepSeek launched its R1 model. The company claims to have trained its latest V3 model, on which R1 is based, for under US$6 million. Lennart Heim, a researcher at the US-based think-tank Rand, told the Financial Times that this figure does not include other costs - like the acquisition of GPUs, salaries paid to tech workers, and the expense of experiments, training and deployment. Researchers have argued that V3 would cost around US$500 million to US$1 billion to operate.
Compare that to its biggest competitor, OpenAI - whose operating costs were estimated at US$8.5 billion by The Information. A Chinese company, with restricted access to the latest Nvidia GPUs, can go toe-to-toe with OpenAI at a fraction of the cost!
No wonder the stock market tanked - Nvidia saw US$589 billion wiped off its market value, and the Nasdaq took its worst beating since 2022.
But it’s China! What about censorship, data security and all that?
I saw quite a few people post about how DeepSeek's R1 censors any response that is critical of the Chinese government. However, R1 is an open-source model (unlike ChatGPT) - which means you can view it, replicate it, and modify it, restrictions and all.
As for data security woes - you can host R1 on your own servers and keep your data to yourself. The Indian Minister of Electronics and Information Technology has announced that India will host DeepSeek locally (given India's tensions with China).
Is ‘Stealing’ From A ‘Thief’ Considered Theft?
You should grab some popcorn for this one. Following the stock market bloodbath, and DeepSeek’s explosive rise to popularity, OpenAI spokesperson Liz Bourgeois told the New York Times that DeepSeek might have “inappropriately distilled” their models.
What they're basically alleging is that DeepSeek harvested massive amounts of output generated by OpenAI's chatbots and used it to train its own models - a technique known as distillation. This would be a violation of OpenAI's terms of service, which prohibit users from employing such output to build competing tech.
However, OpenAI itself is facing multiple lawsuits, including one by the New York Times, for making unauthorized use of copyrighted material to train its models. It'll be fun to see how this plays out, especially if OpenAI decides to go after DeepSeek for doing the very thing it has itself been accused of - and in Chinese jurisdiction, no less.
The AI Unhype
For the past two years, tech bros have been riding the AI hype, telling investors to close their eyes and hand over a lot of money without asking too many questions. Now the assumption that building AI models requires massive capital is no longer valid, and the AI hype train has derailed. This is why I call it an "unhype" event - a knock back to reality and practicality.
This means many more countries will now be encouraged to enter the AI race, and to stop relying on a few big companies for their AI solutions.
Have you been a victim of AI?
Have you been scammed by AI-generated videos or audio clips? Did you spot AI-generated nudes of yourself on the internet?
Decode is trying to document cases of AI abuse, and would like to hear from you. If you are willing to share your experience, do reach out to us at deepfakewatch@boomlive.in. Your privacy is important to us, and we will preserve your anonymity.
About Decode and Deepfake Watch
Deepfake Watch is an initiative by Decode, dedicated to keeping you abreast of the latest developments in AI and its potential for misuse. Our goal is to foster an informed community capable of challenging digital deceptions and advocating for a transparent digital environment.
We invite you to join the conversation, share your experiences, and contribute to the collective effort to maintain the integrity of our digital landscape. Together, we can build a future where technology amplifies truth, not obscures it.
For inquiries, feedback, or contributions, reach out to us at newsletter@boomlive.in.
🖤 Liked what you read? Give us a shoutout! 📢
↪️ Become A BOOM Member. Support Us!
↪️ Stop.Verify.Share - Use Our Tipline: 7700906588
↪️ Follow Our WhatsApp Channel
↪️ Join Our Community of TruthSeekers