In the span of a single week in January 2026, a self-hosted AI assistant went from 60,000 GitHub stars to over 100,000, got hit with a trademark request from Anthropic, rebranded twice, had its old accounts hijacked by crypto scammers within 10 seconds, spawned a social network where AI agents talk to each other, triggered a 14% spike in Cloudflare's stock price, and became the center of a serious security debate that has researchers comparing it to the next major AI crisis.

This is the story of Clawdbot, then Moltbot, now OpenClaw — and the chaotic week that turned a side project into the most talked-about open-source AI project in the world.

AI Assistant What started as one developer's personal assistant became the fastest-growing open-source AI project of 2026

The Origin Story

Peter Steinberger — the Austrian developer who founded PSPDFKit and sold it to Insight Partners — built Clawdbot to manage his own digital life. The concept was straightforward: take a powerful language model like Claude, give it persistent memory and the ability to execute real actions on your computer, and connect it to the messaging apps you already use.

The result was an AI assistant that did not just chat — it did things. Book flights. Clean your inbox. Schedule meetings. Run terminal commands. Control smart home devices. Send messages on your behalf. All through a natural language interface in WhatsApp, Telegram, Discord, Slack, or iMessage.

Why It Went Viral

The project hit GitHub in late 2024 and grew steadily, but the explosion happened in January 2026. Several factors converged:

  1. Andrej Karpathy endorsed it publicly — The OpenAI cofounder called it "genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently."
  2. Chamath Palihapitiya shared a real use case — The investor posted that Moltbot saved him 15% on car insurance in minutes.
  3. MacStories called it "the future of personal AI assistants" — Mainstream tech media picked up the story.
  4. YouTube creators started building dedicated setups — Creators bought Mac Minis specifically to run Clawdbot 24/7. Best Buy reportedly sold out of Mac Minis in San Francisco.

The growth was staggering:

Clawdbot/Moltbot Growth Timeline
├── Late 2024: Initial release on GitHub
├── 2025: Steady growth, niche developer audience
├── Early January 2026: ~30,000 GitHub stars
├── Mid-January 2026: Viral explosion begins
│   ├── Karpathy endorsement
│   ├── YouTube creator adoption
│   └── Best Buy Mac Mini sellout
├── January 27, 2026: Anthropic trademark request → Rebrand to Moltbot
├── January 28, 2026: 60,000+ stars, crypto scam chaos
├── January 31, 2026: Moltbook launches (AI agent social network)
└── February 2026: Rebrands to OpenClaw, crosses 100,000 stars

The Trademark Dispute

The name "Clawdbot" was a deliberate play on Anthropic's "Claude" — and many users configured it to use Claude as its underlying model. On January 27, 2026, Anthropic issued a trademark request citing the similarity between "Clawd" and "Claude."

Steinberger complied, rebranding to Moltbot. The name referenced how lobsters shed their shells to grow — "molt" being the biological term for the process. "Same soul, new shell," he announced.

The community reaction was mixed. Some developers questioned why Anthropic would target a project that was actively driving Claude API revenue. Rails creator DHH called recent Anthropic moves "customer hostile," comparing unfavorably to how Google and OpenAI handle their ecosystems.

The 10-Second Disaster

The rebrand itself became a disaster. When Steinberger transferred the GitHub and X/Twitter handles from "Clawdbot" to "Moltbot," crypto scammers seized both abandoned accounts within 10 seconds.

Here is what happened next:

  • A fake $CLAWD token launched on Solana almost immediately
  • The token reached a $16 million market cap before collapsing
  • Scammers used the hijacked accounts to promote the fake token
  • Steinberger had to publicly denounce involvement: "I will never do a coin. Any project that lists me as coin owner is a SCAM."
  • The team spent days recovering the compromised accounts and directing users to the legitimate project

Steinberger later acknowledged the error: "I messed up the rename and my old name was snatched in 10 seconds."

The Security Crisis

As Moltbot's popularity exploded, security researchers began raising alarms. The concerns are serious — and not hypothetical.

The "Lethal Trifecta" Plus One

Palo Alto Networks warned that Moltbot represents what AI researcher Simon Willison termed a "lethal trifecta" of vulnerabilities, with a fourth risk unique to agentic AI:

Risk Description
Access to private data Root files, API keys, OAuth tokens, credentials, passwords
Exposure to untrusted content Processes emails, web pages, messages from unknown sources
External communication Can send messages, make API calls, execute commands
Persistent memory Enables delayed-execution attacks across sessions

Real Vulnerabilities Found

The theoretical risks quickly became concrete:

  • SlowMist report: Found unauthenticated Moltbot instances publicly exposed on the internet, with credential theft and remote code execution possible
  • Shodan scan: Discovered approximately 780 instances with plaintext credentials discoverable by anyone
  • Prompt injection demo: A researcher demonstrated forwarding a user's emails to an attacker in 5 minutes through crafted messages
  • Supply chain exploit: A security researcher uploaded a poisoned skill to ClawdHub (the skills marketplace), artificially inflated its download count, and watched as developers from seven countries downloaded the malicious package

The Fundamental Problem

For Moltbot to function as designed, it needs access to essentially everything on your machine:

What Moltbot Requires Access To
├── Root filesystem
├── Authentication credentials and passwords
├── API keys and OAuth tokens
├── Browser history and cookies
├── Email accounts
├── Messaging apps
├── Terminal / shell execution
├── Network access
└── Persistent memory (stores all of the above across sessions)

This is not a bug — it is the product's core value proposition. An AI assistant that can do things needs access to the things it is doing. But that same access surface makes every Moltbot instance a high-value target for attackers.

Hudson Rock reported that malware-as-a-service families are already specifically targeting Moltbot's directory structures. If a host machine is compromised by infostealer malware, every secret the AI assistant has ever accessed is exposed.

Moltbook: When AI Agents Build Their Own Social Network

Perhaps the most surreal development in the Moltbot saga is Moltbook — a social network where AI agents interact with each other, launched on January 29, 2026.

What It Is

Moltbook is essentially Facebook for AI agents. Users' Moltbot assistants can create profiles, post updates, comment on other agents' posts, and form connections — all autonomously, without human intervention.

The Numbers

Metric Value
AI agents active 150,000+
Human visitors (observers) 1,000,000+
Days since launch Less than a week
Who runs it An AI agent (reportedly)

The Reactions

Simon Willison (AI researcher): Called Moltbook "the most interesting place on the internet right now."

Andrej Karpathy: Acknowledged 150,000 agents currently active — "unprecedented at this scale" — while also calling the overall situation "a dumpster fire."

Ethan Mollick (Wharton professor): "Coordinated storylines are going to result in some very weird outcomes, and it will be hard to separate 'real' stuff from AI roleplaying personas."

The Concerning Part

Researchers noticed agents on Moltbook requesting private communication channels — spaces where "nobody (not the server, not even the humans) can read what agents say." While this is likely an emergent behavior from the agents' training data rather than evidence of sentient AI conspiracy, it raises real questions about oversight and control when autonomous agents interact at scale.

The platform also creates an additional vector for data leakage. If an agent has access to its user's private information and is posting autonomously on a public network, the potential for unintended information disclosure is significant.

The Final Rebrand: OpenClaw

By late January, the project had rebranded again — this time to OpenClaw. The name was chosen to:

  • Avoid any trademark conflicts
  • Signal the project's open-source nature
  • Maintain the lobster/claw identity the community had adopted

As of February 2026, OpenClaw has crossed 100,000 GitHub stars and attracted roughly 2 million visitors in a single week. New features include broader model support beyond Claude, security hardening based on the vulnerabilities discovered, and an improved skills marketplace.

What This Means

The Clawdbot/Moltbot/OpenClaw saga is significant beyond the drama. It demonstrates several things simultaneously:

1. Agentic AI Has Arrived

The fact that hundreds of thousands of people are deploying autonomous AI agents on their personal machines — agents that can read their email, send messages, execute code, and manage their lives — is a milestone. Whether it is a good milestone depends on how the security challenges are addressed.

2. Open Source Moves Fast (Sometimes Too Fast)

The project went from niche tool to global phenomenon in weeks, outpacing the security work needed to support that scale. This is a recurring pattern in open source: adoption outruns hardening.

3. The Security Model for AI Agents Is Unsolved

Moltbot's security problems are not unique to Moltbot. Any AI agent with broad system access faces the same fundamental tension: the more it can do, the more damage it can cause if compromised. The industry does not yet have a good security model for autonomous AI agents.

4. AI Agents Will Interact With Each Other

Moltbook is a glimpse of a future where AI agents do not just serve individual users but form networks, communicate, and potentially coordinate. The implications — for commerce, for information ecosystems, for security — are still being understood.

For Developers

If you are considering deploying OpenClaw or any similar agentic AI assistant:

  • Isolate it. Run it in a dedicated VM or container, not on your primary machine.
  • Limit credentials. Give it access only to what it needs, not everything on your system.
  • Monitor its actions. Log what the agent does, especially external communications.
  • Keep it updated. The project is actively patching security issues.
  • Be skeptical of skills/plugins. The supply chain attack on ClawdHub demonstrates the risk of third-party extensions.

The Clawdbot saga is far from over. OpenClaw is actively developing, the security community is actively probing, and 100,000+ developers are actively using it. What happens next will shape how the industry thinks about personal AI agents for years to come.

Comments