According to ZDNet, the open-source AI agent formerly known as Clawdbot, now rebranded as Moltbot after an IP issue with Anthropic, has gone viral with around 100,000 GitHub stars in just days. Created by Austrian developer Peter Steinberger, it acts as a personal assistant via messaging apps like iMessage and Telegram, using models like Claude and ChatGPT to perform tasks like email management and flight check-ins. However, Cisco’s security researchers label it an “absolute nightmare,” citing leaked plaintext API keys, Telegram tokens, and Slack credentials from hundreds of misconfigured, exposed instances. The chaos has already spawned a fake Clawdbot AI token scam that raised $16 million before crashing. Offensive security researcher Jamieson O’Reilly found instances with no authentication, and a malicious VS Code extension posing as the agent appeared on January 27. The core threat is prompt injection, where the bot can be tricked into executing malicious commands from any untrusted content it reads.
The core problem is access
Here’s the thing about Moltbot: to be useful, it needs a terrifying level of permission. We’re talking system-level shell commands, file access, and control over your accounts. That’s the whole point—an AI that actually does things. But as the Moltbot documentation itself admits, there’s no perfectly secure setup. You’re essentially installing a super-powered, internet-connected butler that can read anything and execute commands, and then you’re asking it to go browse the web and read your emails. What could possibly go wrong? Security experts like Rahul Sood, who said the model “scares the sh*t out of me,” are worried for a reason. You’re not just giving a bot access; you’re giving a system access to content you don’t control.
Prompt injection is the nightmare fuel
This is the big one that the AI security community is losing sleep over. And Moltbot is a perfect vector. Prompt injection isn’t about hacking *you*; it’s about hacking the *AI’s instructions*. A malicious command could be hidden in a webpage it scrapes, an email attachment, or a document. If the bot reads it, it might just execute it. As the docs note, “the sender is not the only threat surface; the content itself can carry adversarial instructions.” So even if you’re the only one messaging your bot, it can still be hijacked. It could leak your data, send info to a bad actor’s server, or run code on your machine. The Cisco analysis is brutally clear on this extended attack surface.
viral-growth”>The wild west of viral growth
Breakneck popularity in open source is a double-edged sword. Sure, you get hundreds of contributors. But you also get a flood of bad actors. We’ve already seen fake repositories, that $16 million crypto scam, and malicious “skills” or extensions. O’Reilly proved the point by releasing a safe-but-backdoored skill that was downloaded thousands of times before anyone noticed. When something blows up this fast, the ecosystem around it—the plugins, the integrations, the forks—becomes a minefield. Can you trust every piece of code you add? Probably not. And as developer Peter Steinberger noted on X, the community is scrambling to respond, but it’s a race.
So should you touch this thing?
Look, I get the appeal. The promise of a truly autonomous digital assistant is intoxicating. Moltbot might be a glimpse of that future. But right now, it feels like a public beta for a security apocalypse. If you’re not a developer who can meticulously audit every line of code, lock down every configuration, and understand the full scope of the risks, you should probably steer clear. The lore page is cute, but the threat reports aren’t. This is a classic case of choosing between cutting-edge convenience and basic security hygiene. For most people, the sane choice is to wait. Let the security models mature. Let the community harden the project. Because once your API keys are leaked or your system is compromised, that cute lobster avatar won’t be much consolation.
