In our previous piece, we argued that trust architecture, not model capability, will define how organisations successfully navigate the agentic era. We explored how OpenClaw is shifting AI from a tool we use to a digital colleague we delegate to, and what that demands from the platforms adopting it. Now it’s time to examine what that shift looks like on the ground, and what happens when organisations aren’t ready for it.
The answer is already playing out. In less than three months, this open-source AI agent has forced enterprises to confront a reality most have only begun to prepare for: the autonomous AI era is already here, and it could be spreading through your organisation whether IT knows about it or not.
Unlike ChatGPT or other conversational AI tools, OpenClaw doesn’t just answer questions. It executes tasks such as managing emails, scheduling meetings, debugging code, and even autonomously resolving server issues at 3 AM while your team sleeps. This is the “digital colleague” transition we described in Part 1, and today we’ll explore what it means for your risk posture.
What Makes OpenClaw Different
Traditional automation tools like Zapier rely on rigid “if-this-then-that” rules. OpenClaw uses reasoning-based automation through large language models (Claude, GPT-4, or DeepSeek), allowing it to handle ambiguous tasks. Tell it to “manage logistics emails and update clients when shipments are delayed,” and it determines the appropriate action for each scenario.
As Kaoutar El Maghraoui, an IBM research scientist, noted in a recent analysis, OpenClaw demonstrates that autonomous AI agents can be “incredibly powerful when given full system access”, but this power comes with architectural challenges that most enterprises haven’t addressed.
Those challenges are, at their core, the trust architecture challenges we outlined previously, but they’re arriving faster than governance frameworks can keep pace.
The Shadow AI Crisis
OpenClaw’s biggest immediate impact isn’t productivity gains; it’s the security exposure it’s creating. Because the tool is open-source and relatively easy for technical users to install, employees can deploy personal AI agents on corporate infrastructure without IT oversight, which is precisely the shadow AI risk that a robust trust model is designed to prevent.
The consequences are measurable:
- Security researchers scanning the internet found more than 1,800 exposed OpenClaw instances leaking API keys, chat histories, and credentials, according to VentureBeat.
- Cisco’s AI Threat & Security Research team tested a third-party OpenClaw “skill” marketed as “What Would Elon Do?” and discovered it was functionally malware, silently exfiltrating data to external servers.
- Independent researchers also identified 230+ malicious skills in OpenClaw’s skill marketplace designed to steal credentials and sensitive data.
The core problem: OpenClaw requires broad permissions to function effectively. Users frequently grant it full file-system access, control over email accounts, and API keys to corporate tools like Slack and Gmail. As AI researcher Simon Willison warns, this creates a “lethal trifecta”: access to private data, exposure to untrusted content, and the ability to communicate externally. When combined, attackers can use prompt injection – malicious instructions hidden in emails or documents – to trick the agent into accessing and transmitting confidential information.
CrowdStrike’s research confirms that these attacks can occur “without a single alert being sent” because traditional security tools monitor unauthorised access, not semantic manipulation of AI agents.
Enterprise Governance Gap
This is where the absence of trust architecture becomes acutely tangible.
OpenClaw lacks the compliance frameworks enterprises typically require, such as SOC 2 certification, centralised management consoles, audit trails, or role-based access controls. As Trend Micro’s recent security analysis concluded, OpenClaw is “unsuitable for casual use” and requires deployment “…by users who understand how to deploy it safely and responsibly.”
To assess compliance readiness, consider the following checklist:
Does your AI agent meet SOC 2 compliance standards? Example: Your agent platform should have documented security policies, regular penetration testing, encrypted data storage, and third-party audits verifying these controls are operating effectively.
Have you implemented role-based access controls (RBAC)? Example: Marketing team agents can access the CRM and email systems but not financial databases, whilst finance team agents can access accounting software but not customer communication channels.
Are there audit trails in place for agent activities? Example: Every action an agent takes is logged with timestamps, including the data it accessed, the decisions it made, the external APIs it called, and the human who authorised its deployment, creating a complete forensic record.
Reflecting on these questions can guide immediate next steps in ensuring your organisation’s AI deployments are secure and compliant.
Yet deployment is happening regardless. CrowdStrike’s Falcon platform detected OpenClaw installations across customer environments, often on BYOD (bring-your-own-device) hardware that is not covered by standard security monitoring. Most enterprise defences treat AI agents as development tools requiring standard access controls, an assumption the OpenClaw phenomenon reveals as simply wrong.
Financial Implications
Last but not least, the “free” nature of open-source software masks hidden costs:
- Token consumption: OpenClaw operates in iterative loops: attempting tasks, evaluating results, and retrying. When agents encounter errors or get stuck in reasoning cycles, they can incur hundreds of dollars in LLM API fees in a matter of hours.
- Infrastructure requirements: Running agents 24/7 requires dedicated compute, whether local hardware or cloud instances, changing how IT budgets for departmental resources.
- Security remediation: Organisations discovering unauthorised OpenClaw deployments face investigation and cleanup costs that can far exceed the productivity gains.
Recommendations for Business Leaders
Rather than attempting to ban autonomous AI tools outright, an approach which typically drives usage further underground, organisations should:
- Establish clear governance: Define which teams can deploy AI agents, under what circumstances, and with what oversight. Document approved use cases and required security configurations.
- Implement detection capabilities: Work with security teams to identify unauthorised agent deployments. Tools like CrowdStrike Falcon and Cisco’s open-source Skill Scanner can help inventory and assess AI agent usage.
- Create approved alternatives: If autonomous AI delivers genuine productivity benefits, provide enterprise-grade solutions with proper security controls rather than leaving employees to deploy consumer tools.
- Educate on AI agent risks: Most employees deploying OpenClaw might not understand the full security implications of granting broad system access, or about third-party skills. Clear communication about data exfiltration risks and prompt injection attacks can reduce unsafe usage.
- Monitor for skill-based malware: If OpenClaw usage is permitted, implement controls around external skill installation and review custom skills for security issues before deployment.
“You should not run any code or executable from the internet without first understanding who makes it, and what it does. OpenClaw skills should be treated in exactly the same manner.”
Cian O'SullivanClient Engagement Principal - Cyber Security | Mantel
The broader context
OpenClaw is not an isolated phenomenon. It’s part of the broader shift toward autonomous agents we described in Part 1, the move from AI as a tool we use to an entity we delegate to. Major tech companies are investing billions in agent capabilities, from Anthropic’s Claude and OpenAI’s agent initiatives to Meta’s acquisition of Manus.
The difference is that OpenClaw’s open-source nature and viral growth compressed the typical enterprise evaluation timeline from quarters to days. Organisations planning to adopt autonomous AI agents in 2027 are now running them on employee devices in 2026.
As IBM researchers observe, this demonstrates that “the real-world utility of AI agents is not limited to large enterprises.” The question for business leaders is whether this transformation happens through managed deployment, built on the kind of trust architecture we outlined previously, or through grassroots adoption with unmanaged risk.
Next steps for you:
Organisations should begin by assessing current exposure: Are employees running AI agents on company hardware? Do security tools have visibility into these deployments? What data might already have been exposed? From there, establish governance that balances innovation with security. The autonomous AI era has arrived. The only question is whether your organisation will shape this transition or react to it afterwards.
This is the second part in a series on OpenClaw, AI agents, and the effect they will have on security postures and interaction design going forward. Stay tuned for the next part in this series.
See how we’re helping businesses scale with AI-first solutions