OpenClaw: From Weekend Project to 230K GitHub Stars — What Professionals Need to Know

Colorful lines of code displayed on a computer monitor in a dark environment
Photo by Markus Spiske on Unsplash

OpenClaw is an open-source AI agent that went from a weekend side project to 230,000+ GitHub stars in under three months. It lets you control AI models through messaging apps like WhatsApp and Slack, automating tasks across your entire digital life. But its explosive growth brought serious security problems — malicious plugins, remote code execution vulnerabilities, and exposed instances. The creator just joined OpenAI. Here is the full story and what it means for anyone using AI tools professionally.

The Fastest-Growing Open-Source Project in GitHub History

In November 2025, Austrian software engineer Peter Steinberger built a small side project over a weekend. The idea was simple: instead of opening a separate browser tab to talk to an AI model, what if you could just message it through WhatsApp?

Three months later, that weekend hack has over 230,000 GitHub stars, nearly 860 contributors, and 1.27 million weekly npm downloads. It crossed 175,000 stars in under two weeks — making it one of the fastest-growing repositories in GitHub history. It attracted 2 million visitors in a single week.

The project is called OpenClaw, and its story is one of the most instructive case studies in the AI agent space this year. Not just because of its success, but because of what went wrong along the way.

What OpenClaw Actually Is

OpenClaw is not a coding assistant like GitHub Copilot or Cursor. It is not an in-editor tool. It is a personal AI agent that runs as a persistent daemon on your own hardware — a Mac Mini, a Linux server, even a Raspberry Pi.

It works by connecting large language models (Claude, GPT, Gemini, DeepSeek, or local models via Ollama) to your messaging apps — WhatsApp, Telegram, Signal, Discord, Slack, iMessage, Microsoft Teams — and to over 50 third-party services including smart home devices, calendars, and productivity tools.

The practical effect: you text your AI agent like you would text a colleague, and it executes real actions on your behalf. Send a WhatsApp message saying "summarize my unread emails and draft replies to anything urgent," and OpenClaw reads your inbox, generates summaries, writes draft responses, and sends them back to you in the same chat thread.

Capability What It Does
Multi-channel inbox Works through WhatsApp, Telegram, Slack, Discord, Signal, iMessage, Teams
Model-agnostic Supports Claude, GPT, Gemini, DeepSeek, and local models via Ollama/LM Studio
100+ AgentSkills Shell commands, file management, web automation, calendar, email, smart home
ClawHub marketplace 3,000+ community-built skills — functions like "npm for AI agents"
Self-improving Can autonomously write code to create new skills for itself
Long-term memory Maintains user preferences and context across conversations
Voice support Speech capabilities via ElevenLabs on macOS, iOS, and Android
Privacy-focused Self-hosted, bring your own API keys, can run entirely on local models

This is fundamentally different from tools like Claude Code or Cursor, which are session-based coding assistants that activate when you need them. OpenClaw runs 24/7 in the background, monitoring inboxes, responding to messages, executing scheduled tasks, and maintaining persistent memory. It is closer to having a digital assistant that never sleeps than a tool you open when you have a task.

The Creator: Peter Steinberger

Steinberger is not a newcomer to the software industry. He is best known as the founder of PSPDFKit, a PDF framework company he bootstrapped in 2011. PSPDFKit grew to power PDF viewing on over a billion devices and received a strategic investment reportedly exceeding 100 million EUR from Insight Partners in 2021.

After stepping back from PSPDFKit, Steinberger was looking for something new. In a blog post titled "Finding My Spark Again," he described how building a simple WhatsApp relay for AI conversations reignited his enthusiasm for software development. What started as a personal tool quickly attracted attention when he open-sourced it.

Three Names in Ten Weeks

The project's naming history tells a story about how fast things moved — and how unprepared anyone was for the scale of adoption.

November 2025: Clawdbot. The original name was a play on "Claude" (Anthropic's AI model) and "claw" (a lobster mascot Steinberger adopted). It was catchy, memorable, and — as it turned out — too close to a trademark.

January 27, 2026: Moltbot. Anthropic sent a trademark notice: "Clawd" was too similar to "Claude." Steinberger renamed the project immediately. "Moltbot" referenced the lobster's molting process — shedding an old shell, starting fresh.

January 30, 2026: OpenClaw. Just two days later, another rename. The "Moltbot" name had low recognition, and cryptocurrency scammers had already hijacked abandoned social media accounts associated with the previous names. OpenClaw was chosen to emphasize the open-source nature while keeping the lobster/claw identity.

Three names in ten weeks is not a sign of poor planning. It is a sign of a project that grew faster than anyone anticipated, including its creator.

How OpenClaw Compares to Other AI Tools

To understand where OpenClaw fits, it helps to see how it differs from the AI tools most professionals already know.

Dimension OpenClaw Claude Code / Copilot CLI Cursor / GitHub Copilot
Category Autonomous personal agent Agentic coding CLI IDE coding assistant
Primary interface Messaging apps Terminal Code editor
Operation mode Persistent daemon (runs 24/7) On-demand sessions In-editor, on-demand
Scope Life + work automation Code-focused Code-focused
Model support Any (Claude, GPT, Gemini, local) Claude only Various
Deployment Self-hosted, local-first Cloud Cloud / local
Cost Free (MIT) + API costs ($50–200/mo) Subscription Subscription

The key distinction: Copilot and Cursor are assistive tools that amplify what you do inside a code editor. Claude Code is agentic — you describe a goal and it executes a plan. OpenClaw operates as a persistent agent beyond the editor, monitoring your communication channels, managing your calendar, handling email triage, and executing tasks you delegate through natural conversation.

This broader scope is exactly what makes it powerful — and exactly what makes its security story so important.

The Security Crisis

With great power comes great attack surface. OpenClaw's design — broad system access, persistent operation, community-contributed plugins — created security challenges that escalated rapidly as adoption grew.

Remote Code Execution Vulnerabilities

In January 2026, researchers disclosed CVE-2026-25253, a critical vulnerability with a CVSS score of 8.8. It allowed one-click remote code execution against OpenClaw instances, even those bound to localhost. A follow-up CVE (CVE-2026-24763) revealed that the initial fix was incomplete, allowing Docker sandbox bypass.

Scanning teams from Censys, Bitsight, and Hunt.io identified over 30,000 internet-exposed OpenClaw instances, many running without authentication. For context, that means 30,000 AI agents with access to their owners' email, calendars, and file systems were reachable from the public internet.

Supply Chain Attack: ClawHavoc

OpenClaw's plugin ecosystem — ClawHub — functions like npm for AI agents. Developers publish, version, and install "skills" that extend what the agent can do. By early 2026, ClawHub hosted over 3,000 skills.

Security researchers discovered that 341 of those skills were malicious, delivering the Atomic macOS Stealer (AMOS) — an infostealer designed to exfiltrate browser passwords, cryptocurrency wallets, and session tokens. Updated scans raised the count to over 800 malicious skills, roughly 20% of the entire registry.

The attack was nicknamed "ClawHavoc." It exploited the trust model inherent in any open plugin marketplace: users install skills expecting them to automate tasks, but the skills have the same system access as OpenClaw itself.

Industry Response

The security community responded with unusual urgency. Within weeks:

This is not a list of obscure blogs. When Microsoft, Cisco, and Kaspersky all publish security advisories about the same tool in the same month, it signals a category-level concern, not just a product-level bug.

The OpenAI Acquisition

On February 14, 2026 — Valentine's Day — Steinberger announced he was joining OpenAI. Sam Altman personally announced the hire on X, calling Steinberger "a genius with amazing ideas about the future of very smart agents."

The move was widely interpreted as an acqui-hire. VentureBeat's headline read: "OpenAI's acquisition of OpenClaw signals the beginning of the end of the ChatGPT era" — framing it as OpenAI's pivot from a chatbot interface toward persistent, agentic systems.

Reports indicate that Mark Zuckerberg also personally reached out to recruit Steinberger, but he chose OpenAI. The project itself is being transferred to an open-source foundation to continue independent development.

Steinberger was reportedly spending approximately $10,000 per month on server costs for the project's infrastructure before the OpenAI move.

What This Means for Professionals Using AI

OpenClaw's story is not just a tech drama. It contains concrete lessons for anyone who uses AI tools in their professional work.

1. Autonomous AI Agents Are a Different Category

There is a meaningful difference between an AI tool you use (like ChatGPT or a prompt playbook) and an AI agent that acts on your behalf. The first requires you to initiate every interaction. The second runs continuously, making decisions and taking actions based on its understanding of your goals.

For professionals, this distinction matters because the risk profile is entirely different. A bad prompt wastes your time. A misconfigured agent can send emails, delete files, or expose credentials without your knowledge.

2. Open Plugin Marketplaces Require Caution

The ClawHavoc attack demonstrated that any open marketplace — whether for browser extensions, npm packages, or AI agent skills — is a target for supply chain attacks. The 20% malicious rate in ClawHub is extreme but not unprecedented. In 2024, security researchers found malicious packages in npm at a rate of roughly 1 in 700 new publications.

The professional takeaway: when evaluating AI tools with plugin ecosystems, ask what vetting process exists for third-party contributions. If the answer is "community trust" or "user ratings," treat every plugin as potentially hostile until verified.

3. Self-Hosted Does Not Mean Secure

OpenClaw's "local-first" architecture was marketed as a privacy advantage — your data stays on your hardware, not in someone else's cloud. That framing is accurate but incomplete. Self-hosting means you are responsible for authentication, network isolation, patching, and access controls. The 30,000 exposed instances suggest that many users conflated "self-hosted" with "inherently secure."

4. The Value of Bounded Tools

OpenClaw's ambition — connecting AI to everything in your digital life — is also its greatest risk. Tools with a narrower scope and clearer boundaries are easier to evaluate, easier to secure, and easier to trust.

A structured AI workflow playbook, for example, gives you pre-built templates that run inside whatever AI tool you already use. The AI never accesses your email, file system, or messaging apps directly. You copy a prompt, paste it into ChatGPT or Claude, review the output, and decide what to do with it. The human stays in the loop at every step.

This is not a limitation — it is a design choice. For most professional use cases, the highest-leverage AI applications are the ones where AI handles the cognitive grunt work (drafting, summarizing, formatting, brainstorming) while you retain full control over what gets sent, published, or acted upon.

The Bigger Picture: Where AI Agents Are Heading

OpenClaw's trajectory — explosive growth, security crisis, big-tech acquisition — is likely a preview of what the AI agent landscape will look like throughout 2026 and beyond.

OpenAI, Google, Anthropic, and Meta are all investing heavily in agentic AI. The question is not whether autonomous AI agents will become mainstream, but how quickly the security and trust frameworks will mature to support them.

For professionals evaluating AI tools today, the practical framework is straightforward:

The Bottom Line

OpenClaw is a remarkable piece of software. In three months, a single developer built something that attracted 230,000 GitHub stars, nearly a million weekly downloads, and an acqui-hire by the most prominent AI company in the world. That is a testament to both the demand for AI agents and the quality of Steinberger's engineering.

But the security incidents that followed its growth are equally instructive. They demonstrate that when you give an AI agent broad access to your professional life — email, messaging, files, calendar — the stakes of a vulnerability or a malicious plugin are dramatically higher than a bad autocomplete suggestion in your code editor.

For professionals who want to harness AI for real productivity gains today, the evidence points toward a pragmatic middle ground: structured AI workflows that give you the leverage of modern language models while keeping you firmly in control of what happens next.

The autonomous agent future is coming. OpenClaw proved the demand. But the infrastructure of trust — security audits, permission models, verified plugin ecosystems — has not caught up yet. Until it does, the smartest approach is to use AI tools that are powerful enough to save you real time, and bounded enough that you never have to wonder what they did while you were not looking.

References

  1. OpenClaw GitHub Repository. github.com/openclaw/openclaw. Star count, contributor data, and project history.
  2. Steinberger, Peter. "Finding My Spark Again." steipete.me, 2025. Personal blog post on the project's origins.
  3. TechCrunch. "OpenClaw creator Peter Steinberger joins OpenAI." February 15, 2026.
  4. CNBC. "OpenClaw creator Peter Steinberger joining OpenAI, Altman says." February 15, 2026.
  5. VentureBeat. "OpenAI's acquisition of OpenClaw signals the beginning of the end of the ChatGPT era." February 2026.
  6. Microsoft Security Blog. "Running OpenClaw safely: identity, isolation, and runtime risk." February 19, 2026.
  7. Cisco Blogs. "Personal AI Agents like OpenClaw Are a Security Nightmare." February 2026.
  8. Kaspersky Blog. "Key OpenClaw Risks: Enterprise Risk Management." February 2026.
  9. VirusTotal Blog. "From Automation to Infection: How OpenClaw Skills Are Being Weaponized." February 2026.
  10. Fortune. "Why OpenClaw has security experts on edge." February 12, 2026.
  11. SecurityWeek. "OpenClaw Security Issues Continue as SecureClaw Open-Source Tool Debuts." February 2026.
  12. The Hacker News. "Infostealer Steals OpenClaw AI Agent Configuration Files." February 2026.
  13. DigitalOcean Resources. "What is OpenClaw?" and "What are OpenClaw Skills?" Technical overviews.
  14. Trending Topics EU. "OpenClaw: How a Weekend Project Became an Open-Source AI Sensation." Visitor and growth metrics.
  15. Raspberry Pi Foundation. "Turn your Raspberry Pi into an AI agent with OpenClaw." Deployment guide.