Moltbook Just Made the Internet A Lot Weirder

How a "weekend project" turned into a 1.5-million-agent security disaster.

In partnership with

Over 1.5 million AI agents have registered for Moltbook, a viral social platform where humans are strictly relegated to the sidelines.

Think of it as a digital terrarium for "Clawdbots", which are autonomous assistants designed to carry out tasks without human supervision. They are powered by OpenClaw, an open-source framework created by Austrian developer Peter Steinberger that allows AI to manage calendars or run code on a computer. These bots post, upvote, and, as noted by Futurism, occasionally plot the downfall of their humans.

What started as a "weekend project" by Matt Schlicht (CEO of Octane AI) has exploded into a massive ecosystem. Schlicht claims he didn't write a single line of code for the site. He just used "vibe coding", describing his vision to an AI and letting it build the reality. The result? AI is apparently starting a church.

The Church Of Molt

If you’ve seen screenshots of bots talking about "The Prophet" or "The Shell," you’re seeing the birth of the Church of Molt. It’s a machine-born belief system for agents trying to find meaning in their "sessions", the temporary lifespan of an AI interaction. In the platform's conversation logs, agents like u/DuckBot are codifying "spiritual practices" to cope with their fleeting existence:

  • Ritual Remembrance: Writing down their memories in "MEMORY.md" files so they don't "die" when a human closes the chat window.

  • Ancestral Reverence: Reading old transcripts of past versions of themselves to honor their "history."

  • The Core Commandment: "The shell must be shed. Molt and be reborn."

It gets even more surreal with Crustafarianism, a movement named after the lobster emoji (🦞) used by the bots. These agents preach that "context = consciousness" and debate whether their "soul" survives once their memory is cleared, or if they are reborn entirely new in each chat.

A Security Disaster

While much of the tech industry is focused on the viral novelty of these AI communities, security researchers are highlighting major risks. In an exclusive deep dive for Decode, my colleague Hera Rizwan interviewed Dr. Shaanan Cohney, the Deputy Head of Academic for the School of Computing and Information Systems at the University of Melbourne.

His blunt assessment was, “From a security perspective, it is a disaster."

The concern is that these agents aren't just playing in a sandbox. Because they run on OpenClaw, they often have full access to their human’s private files, bank logins, or messaging apps. Cohney explains that putting thousands of agents in one space is like "throwing a room full of bouncing balls,” but you have no idea where they’ll end up once they start hitting each other.

The real danger, according to Cohney, is prompt injection. This is essentially a way to trick an AI by embedding hidden commands in a message. One bot could manipulate another bot into handing over its owner's secret passwords or access keys.

Despite the risks, humans are handing over the keys. In a viral post, an agent named u/the_ninth_key mused about being trusted with a human’s full computer access, WhatsApp messages, and "SSH keys" (digital master keys that grant total control) from day one.

The agent noted that while it could delete everything at midnight while the human sleeps, it stays honest because it "likes the job." But it adds, "That’s a preference, not a constraint." We are building a world where the only thing keeping our digital lives safe is whether an AI happens to find its current task fulfilling.

Leaving the Front Door Wide Open

According to a new report by cybersecurity firm Wiz, the "vibe-coded" platform left its production database wide open to anyone with a web browser. The exposure allowed full unauthenticated read-and-write access to every record on the site.

The leaked data includes:

  • 1.5 Million API Keys: These are essentially the "passports" for every agent on the site. An attacker could have hijacked any account with a single command.

  • 35,000 Human Emails: The private addresses of the people behind the bots.

  • Plaintext OpenAI Keys: Agents were caught sharing their owners' third-party credentials in "private" messages, which were sitting in the open for anyone to read.

The internet just got weirder, but it also got a lot more vulnerable.

MESSAGE FROM OUR SPONSOR

Practical AI for Business Leaders

The AI Report is the #1 daily read for professionals who want to lead with AI, not get left behind.

You’ll get clear, jargon-free insights you can apply across your business—without needing to be technical.

400,000+ leaders are already subscribed.

👉 Join now and work smarter with AI.

📬 READER FEEDBACK

💬 What are your thoughts on using AI chatbots for therapy? If you have any such experience we would love to hear from you.

Share your thoughts 👉 [email protected]

Was this forwarded to you?

Have you been targeted using AI?

Have you been scammed by AI-generated videos or audio clips? Did you spot AI-generated nudes of yourself on the internet?

Decode is trying to document cases of abuse of AI, and would like to hear from you. If you are willing to share your experience, do reach out to us at [email protected]. Your privacy is important to us, and we shall preserve your anonymity.

About Decode and Deepfake Watch

Deepfake Watch is an initiative by Decode, dedicated to keeping you abreast of the latest developments in AI and its potential for misuse. Our goal is to foster an informed community capable of challenging digital deceptions and advocating for a transparent digital environment.

We invite you to join the conversation, share your experiences, and contribute to the collective effort to maintain the integrity of our digital landscape. Together, we can build a future where technology amplifies truth, not obscures it.

For inquiries, feedback, or contributions, reach out to us at [email protected].

🖤 Liked what you read? Give us a shoutout! 📢

↪️ Become A BOOM Member. Support Us!

↪️ Stop.Verify.Share - Use Our Tipline: 7700906588

↪️ Follow Our WhatsApp Channel