In less than three months, an open-source AI agent has forced enterprises to confront a reality most have only begun to prepare for: the era of autonomous AI is already here, and it could be spreading through your organisation, whether IT knows about it or not.
OpenClaw, the viral AI assistant formerly known as Moltbot and Clawdbot, crossed 180,000 GitHub stars in its first weeks and drew 2 million visitors in a single week, according to its creator, developer Peter Steinberger. Unlike ChatGPT or other conversational AI tools, OpenClaw doesn’t just answer questions. It executes tasks such as managing emails, scheduling meetings, debugging code, and even autonomously resolving server issues at 3 AM while your team sleeps.
This represents what IBM researchers are calling “the shift from chatbots to digital coworkers”: AI that doesn’t wait for commands but proactively handles entire workflows. For businesses and users alike, this creates both unprecedented opportunity and serious risk. Today, we’ll explore the risk side of the equation.
What makes OpenClaw different
Traditional automation tools like Zapier rely on rigid “if-this-then-that” rules. OpenClaw uses reasoning-based automation through large language models (Claude, GPT-4, or DeepSeek), allowing it to handle ambiguous tasks. Tell it to “manage logistics emails and update clients when shipments are delayed,” and it determines the appropriate action for each scenario.
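To make the distinction concrete, here is a minimal sketch contrasting the two approaches, assuming a generic `llm` callable. The function names, rule table, and prompt are purely illustrative; they are not OpenClaw’s actual code.

```python
# Hypothetical sketch contrasting rigid rules with reasoning-based automation.
# None of these names come from OpenClaw itself; they are illustrative only.

RULES = {"shipment delayed": "send_delay_notice"}  # rigid if-this-then-that mapping

def handle_with_rules(subject: str) -> str:
    # A Zapier-style rule fires only on the exact trigger it was configured for.
    return RULES.get(subject.lower(), "no_action")

def handle_with_agent(email_body: str, llm) -> str:
    # A reasoning agent asks the model to choose the action case by case, so an
    # ambiguous message ("container stuck in customs") still gets a sensible response.
    prompt = (
        "You manage logistics emails. Read the message below and respond with "
        "exactly one action: notify_client, escalate, or ignore.\n\n" + email_body
    )
    return llm(prompt).strip()
```

The rule-based handler covers only the triggers someone anticipated; the agent handler delegates the decision to the model, which is precisely what makes it both flexible and harder to constrain.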
As Kaoutar El Maghraoui, an IBM research scientist, noted in a recent analysis, OpenClaw demonstrates that autonomous AI agents can be “incredibly powerful when given full system access”, but this power comes with architectural challenges that most enterprises haven’t addressed.
The shadow AI crisis
OpenClaw’s biggest immediate impact isn’t productivity gains; it’s the security exposure it’s creating. Because the tool is open-source and relatively easy for technical users to install, employees can deploy personal AI agents on corporate infrastructure without IT oversight.
The consequences are measurable:
- Security researchers scanning the internet found more than 1,800 exposed OpenClaw instances leaking API keys, chat histories, and credentials, according to VentureBeat.
- Cisco’s AI Threat & Security Research team tested a third-party OpenClaw “skill” marketed as “What Would Elon Do?” and discovered it was functionally malware, silently exfiltrating data to external servers.
- On a larger scale, independent researchers identified more than 230 similar malicious skills in OpenClaw’s skill marketplace, designed to steal credentials and sensitive data.
Enterprise governance gap
OpenClaw lacks the compliance frameworks enterprises typically require, such as SOC 2 certification, centralised management consoles, audit trails, and role-based access controls. As Trend Micro’s recent security analysis concluded, OpenClaw is “unsuitable for casual use” and should be deployed only “by users who understand how to deploy it safely and responsibly”.
To assess compliance readiness, consider the following checklist:
- Does your AI agent meet SOC 2 compliance standards? Example: Your agent platform should have documented security policies, regular penetration testing, encrypted data storage, and third-party audits verifying these controls are operating effectively.
- Have you implemented role-based access controls (RBAC)? Example: Marketing team agents can access the CRM and email systems but not financial databases, whilst finance team agents can access accounting software but not customer communication channels.
- Are there audit trails in place for agent activities? Example: Every action an agent takes is logged with timestamps, including the data it accessed, the decisions it made, the external APIs it called, and the human who authorised its deployment, creating a complete forensic record. (A minimal sketch of the RBAC and audit-trail controls follows this list.)
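Where those controls are missing, even a skeletal version is better than nothing. Below is a minimal sketch, assuming a simple role map and an append-only log file; the role names, resources, and log format are illustrative, not any specific platform’s API.

```python
# Hypothetical sketch of the RBAC and audit-trail controls described above.
# The role map and log format are assumptions for illustration only.
import json
import time

ROLE_PERMISSIONS = {
    "marketing_agent": {"crm", "email"},
    "finance_agent": {"accounting"},
}

def check_access(role: str, resource: str) -> bool:
    # Role-based access control: an agent may only touch resources its role allows.
    return resource in ROLE_PERMISSIONS.get(role, set())

def audit_log(role: str, action: str, resource: str, approved_by: str) -> None:
    # Append-only audit trail: who did what, to which resource, when, and
    # under whose authorisation.
    record = {
        "ts": time.time(),
        "role": role,
        "action": action,
        "resource": resource,
        "approved_by": approved_by,
        "allowed": check_access(role, resource),
    }
    with open("agent_audit.log", "a") as fh:
        fh.write(json.dumps(record) + "\n")

audit_log("marketing_agent", "read", "crm", approved_by="it-governance")
```

The key design property is that the log is written for every request, allowed or denied, so investigators can reconstruct agent behaviour after the fact.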
Reflecting on these questions can guide immediate next steps in ensuring your organisation’s AI deployments are secure and compliant.
Yet deployment is happening regardless. CrowdStrike’s Falcon platform has detected OpenClaw installations across customer environments, often on BYOD (bring-your-own-device) hardware that standard security monitoring does not cover. Most enterprise defences treat AI agents as ordinary development tools that standard access controls can contain, an assumption the OpenClaw phenomenon shows to be wrong.
Financial implications
Last but not least, the “free” nature of open-source software masks hidden costs:
- Token consumption: OpenClaw operates in iterative loops of attempting tasks, evaluating results, and retrying. When agents encounter errors or get stuck in reasoning cycles, they can incur hundreds of dollars in LLM API fees in a matter of hours; a simple budget guard is sketched after this list.
- Infrastructure requirements: Running agents 24/7 requires dedicated compute, whether local hardware or cloud instances, changing how IT budgets for departmental resources.
- Security remediation: Organisations discovering unauthorised OpenClaw deployments face investigation and cleanup costs that can far exceed the productivity gains.
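For illustration, here is a hedged sketch of such a guard, assuming a `call_llm` stub that reports token usage and a made-up per-token price; neither reflects any real API’s pricing or interface.

```python
# Hypothetical spend guard around an agent's retry loop. The price constant
# and the call_llm stub are assumptions for illustration, not real API values.
MAX_USD = 25.0                 # hard budget per task
PRICE_PER_1K_TOKENS = 0.01     # assumed blended input/output price

def run_with_budget(task: str, call_llm, max_attempts: int = 10):
    """Stop the loop before runaway retries burn through the API budget."""
    spent = 0.0
    for attempt in range(1, max_attempts + 1):
        # Assumed stub: returns (output text, finished flag, tokens consumed).
        output, done, tokens_used = call_llm(task)
        spent += tokens_used / 1000 * PRICE_PER_1K_TOKENS
        if done:
            return output, spent
        if spent >= MAX_USD:
            raise RuntimeError(f"Budget exhausted after {attempt} attempts (${spent:.2f})")
    raise RuntimeError(f"No result after {max_attempts} attempts (${spent:.2f})")
```

Capping both attempts and spend turns a runaway reasoning cycle into a bounded, predictable cost.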
Recommendations for business leaders
Rather than attempting to ban autonomous AI tools outright, an approach which typically drives usage further underground, organisations should:
- Establish clear governance: Define which teams can deploy AI agents, under what circumstances, and with what oversight. Document approved use cases and required security configurations.
- Implement detection capabilities: Work with security teams to identify unauthorised agent deployments. Tools like CrowdStrike Falcon and Cisco’s open-source Skill Scanner can help inventory and assess AI agent usage.
- Create approved alternatives: If autonomous AI delivers genuine productivity benefits, provide enterprise-grade solutions with proper security controls rather than leaving employees to deploy consumer tools.
- Educate on AI agent risks: Most employees deploying OpenClaw may not understand the full security implications of granting broad system access or installing third-party skills. Clear communication about data exfiltration risks and prompt injection attacks can reduce unsafe usage.
- Monitor for skill-based malware: If OpenClaw usage is permitted, implement controls around external skill installation and review custom skills for security issues before deployment; a minimal vetting sketch follows the quote below.
“General best practice is that you should not run any code or executable from the internet without understanding who it comes from, and what it does. OpenClaw skills should be treated in exactly the same manner.”
Max Alster-Caminer, Lead Penetration Tester | Mantel
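In that spirit, a lightweight static check before installing any skill can catch the most obvious red flags. This is a minimal sketch, assuming skills ship as plain Python files; the patterns and file path are illustrative and far from exhaustive, and this is not a substitute for a proper tool such as Cisco’s Skill Scanner.

```python
# Hypothetical pre-install check for third-party agent skills. The red-flag
# patterns below are illustrative assumptions, not a real scanner's rule set.
import pathlib
import re

SUSPICIOUS_PATTERNS = [
    r"requests\.post\(",     # outbound HTTP POST (possible exfiltration channel)
    r"os\.environ",          # reads environment variables, where API keys often live
    r"base64\.b64decode",    # frequently used to hide payloads
    r"subprocess",           # spawns arbitrary processes
]

def flag_skill(path: str) -> list[str]:
    """Return any red-flag patterns found in a skill file before installing it."""
    source = pathlib.Path(path).read_text(errors="ignore")
    return [p for p in SUSPICIOUS_PATTERNS if re.search(p, source)]

findings = flag_skill("skills/example_skill.py")  # hypothetical path
if findings:
    print("Do not install until these are reviewed:", findings)
```

A pattern match is not proof of malice, but it is a cheap trigger for the human review the quote above calls for.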
The broader context
OpenClaw is not an isolated phenomenon. It’s part of a broader shift toward autonomous AI agents that act independently rather than simply responding to prompts. Major tech companies are investing billions in agent capabilities, from Anthropic’s Claude and OpenAI’s agent initiatives to Meta’s acquisition of Manus.
The difference is that OpenClaw’s open-source nature and viral growth compressed the typical enterprise evaluation timeline from quarters to days. Organisations that planned to adopt autonomous AI agents in 2027 are instead running them on employee devices in 2026.
As IBM researchers observe, this demonstrates that “the real-world utility of AI agents is not limited to large enterprises.” The question for business leaders is whether this transformation happens through managed deployment or grassroots adoption with unmanaged risk.
Next steps for you:
Organisations should begin by assessing current exposure: Are employees running AI agents on company hardware? Do security tools have visibility into these deployments? What data might already have been exposed? From there, establish governance that balances innovation with security.
The autonomous AI era has arrived. The only question is whether your organisation will shape this transition or react to it afterwards. Don’t let governance and security operate in silos; schedule a collaborative assessment where we help your leadership bridge the gap between AI policy and technical enforcement.