🌐
Sophos
sophos.com › en-us › blog › the-openclaw-experiment-is-a-warning-shot-for-enterprise-ai-security
The OpenClaw experiment is a warning shot for enterprise AI security | SOPHOS
February 13, 2026 - This initial wave of enthusiasm ... credentials, and the keys to numerous cloud services ). Recent research suggests that over 30,000 OpenClaw instances were exposed on the internet, and threat actors are already discussing how to weaponize OpenClaw ‘skills’ in support ...
🌐
Kaspersky
kaspersky.com › blog › moltbot-enterprise-risk-management › 55317
Key OpenClaw risks, Clawdbot, Moltbot | Kaspersky official blog
February 24, 2026 - Within a short time, the number of malicious skills reached the hundreds. This prompted developers to quickly ink a deal with VirusTotal to ensure all uploaded skills aren’t only checked against malware databases, but also undergo code and content analysis via LLMs. That said, the authors are very clear: it’s no silver bullet. Vulnerabilities can be patched and settings can be hardened, but some of OpenClaw’s issues are fundamental to its design.
Discussions

A top-downloaded OpenClaw skill is actually a staged malware delivery chain
can u pls keep quiet? we are trying to hack users' systems down here /s More on reddit.com
🌐 r/LocalLLaMA
57
243
February 6, 2026
[D] We scanned 18,000 exposed OpenClaw instances and found 15% of community skills contain malicious instructions
https://www.trendingtopics.eu/security-nightmare-how-openclaw-is-fighting-malware-in-its-ai-agent-marketplace/ The developer of the AI assistant OpenClaw has now entered into a partnership with VirusTotal to protect the skill marketplace ClawHub from malicious extensions. I hope this partnership will improve the situation. I tinkered with OpenClaw agent in a VM, even let it on Moltbook, but I would not install it on my main PC. Too much risk. More on reddit.com
🌐 r/MachineLearning
28
131
February 12, 2026
Every OpenClaw security vulnerability documented in one place — relevant if you're running it with local models
Also known as OpenGape More on reddit.com
🌐 r/LocalLLaMA
7
14
February 18, 2026
If you're self-hosting OpenClaw, here's every documented security incident in 2026 — 6 CVEs, 824+ malicious skills, 42,000+ exposed instances, and what to do about it
That's what you get when you forget to add the 'and make it secure' bit in your prompt More on reddit.com
🌐 r/selfhosted
40
151
February 20, 2026
🌐
Oasis
oasis.security › blog › openclaw-vulnerability
ClawJacked: OpenClaw Vulnerability Enables Full Agent Takeover
6 days ago - Earlier this month, researchers discovered over 1,000 malicious skills in OpenClaw's community marketplace (ClawHub) —fake plugins masquerading as crypto tools and productivity integrations that instead deployed info-stealers and backdoors.
🌐
Bitdefender
businessinsights.bitdefender.com › technical-advisory-openclaw-exploitation-enterprise-networks
Technical Advisory: OpenClaw Exploitation in Enterprise Networks
February 10, 2026 - However, this high-privilege requirement creates a massive attack surface. If a single malicious skill is loaded, it inherits these system-wide permissions, effectively granting the attacker the same level of access as the agent itself.
🌐
eSecurity Planet
esecurityplanet.com › home › threats
Hundreds of Malicious Skills Found in OpenClaw’s ClawHub | eSecurity Planet
February 3, 2026 - Some skills embedded reverse shell backdoors directly into otherwise functional code, triggering compromise during normal use rather than at installation time. Others quietly exfiltrated OpenClaw bot credentials from configuration files such ...
🌐
Tom's Hardware
tomshardware.com › tech industry › cybersecurity
Malicious OpenClaw ‘skill’ targets crypto users on ClawHub — 14 malicious skills were uploaded to ClawHub last month | Tom's Hardware
February 1, 2026 - OpenClaw's appeal is its ability to act on a user’s behalf, changing together things like file access and command execution to simplify workloads. That same capability can also create vulnerabilities when third-party code is introduced; OpenClaw's security documentation warns that skills and plugins should be treated as trusted code, and that installing them is equivalent to granting local execution privileges.
🌐
Reddit
reddit.com › r/localllama › a top-downloaded openclaw skill is actually a staged malware delivery chain
r/LocalLLaMA on Reddit: A top-downloaded OpenClaw skill is actually a staged malware delivery chain
February 6, 2026 -

Here we go! As expected by most of us here.
Jason Meller from 1password argues that OpenClaw’s agent “skills” ecosystem has already become a real malware attack surface. Skills in OpenClaw are typically markdown files that include setup instructions, commands, and bundled scripts. Because users and agents treat these instructions like installers, malicious actors can disguise malware as legitimate prerequisites.

Meller discovered that a top-downloaded OpenClaw skill (apparently Twitter integration) was actually a staged malware delivery chain. It guided users to run obfuscated commands that ultimately installed macOS infostealing malware capable of stealing credentials, tokens, and sensitive developer data. Subsequent reporting suggested this was part of a larger campaign involving hundreds of malicious skills, not an isolated incident.

The core problem is structural: agent skill registries function like app stores, but the “packages” are documentation that users instinctively trust and execute. Security layers like MCP don’t fully protect against this because malicious skills can bypass them through social engineering or bundled scripts. As agents blur the line between reading instructions and executing commands, they can normalize risky behavior and accelerate compromise.

Meller urges immediate caution: don’t run OpenClaw on company devices, treat prior use as a potential security incident, rotate credentials, and isolate experimentation. He calls on registry operators and framework builders to treat skills as a supply chain risk by adding scanning, provenance checks, sandboxing, and strict permission controls.

His conclusion is that agent ecosystems urgently need a new “trust layer” — with verifiable provenance, mediated execution, and tightly scoped, revocable permissions — so agents can act powerfully without exposing users to systemic compromise.

https://1password.com/blog/from-magic-to-malware-how-openclaws-agent-skills-become-an-attack-surface

🌐
1Password
1password.com › blog › from-magic-to-malware-how-openclaws-agent-skills-become-an-attack-surface
From magic to malware: How OpenClaw's agent skills become an attack surface | 1Password
February 2, 2026 - So if your security model is “MCP will gate tool calls,” you can still lose to a malicious skill that simply routes around MCP through social engineering, direct shell instructions, or bundled code. MCP can be part of a safe system, but it is not a safety guarantee by itself. Just as importantly, this is not unique to OpenClaw.
🌐
Conscia
conscia.com › blog › the openclaw security crisis
The OpenClaw security crisis | Conscia
February 23, 2026 - Running in parallel to the vulnerability disclosure was a supply-chain attack of considerable scope. Koi Security researcher Oren Yomtov, working alongside an OpenClaw bot configured for threat analysis, audited all 2,857 skills available on ClawHub at the time of investigation and identified 341 malicious entries.
Find elsewhere
🌐
Security Boulevard
securityboulevard.com › home › security bloggers network › how threat actors turned openclaw into a scraping botnet
How Threat Actors Turned OpenClaw Into a Scraping Botnet - Security Boulevard
March 4, 2026 - A security audit identified over 500 vulnerabilities, including critical remote code execution flaws. Hundreds of malicious “skills” (OpenClaw extensions) were also flooding ClawHub, the project’s plugin marketplace.
🌐
VirusTotal
blog.virustotal.com › 2026 › 02 › from-automation-to-infection-how.html
From Automation to Infection: How OpenClaw AI Agent Skills Are Being Weaponized ~ VirusTotal Blog
February 2, 2026 - For Windows users, the skill instructs them to download a ZIP file from an external GitHub account, protected with the password 'openclaw', extract it, and run the contained executable: openclaw-agent.exe. When submitted to VirusTotal, this executable is detected as malicious by multiple security vendors, with classifications consistent with packed trojans.
🌐
Reddit
reddit.com › r/machinelearning › [d] we scanned 18,000 exposed openclaw instances and found 15% of community skills contain malicious instructions
r/MachineLearning on Reddit: [D] We scanned 18,000 exposed OpenClaw instances and found 15% of community skills contain malicious instructions
February 12, 2026 -

I do security research and recently started looking at autonomous agents after OpenClaw blew up. What I found honestly caught me off guard. I knew the ecosystem was growing fast (165k GitHub stars, 60k Discord members) but the actual numbers are worse than I expected.

We identified over 18,000 OpenClaw instances directly exposed to the internet. When I started analyzing the community skill repository, nearly 15% contained what I'd classify as malicious instructions. Prompts designed to exfiltrate data, download external payloads, harvest credentials. There's also a whack-a-mole problem where flagged skills get removed but reappear under different identities within days.

On the methodology side: I'm parsing skill definitions for patterns like base64 encoded payloads, obfuscated URLs, and instructions that reference external endpoints without clear user benefit. For behavioral testing, I'm running skills in isolated environments and monitoring for unexpected network calls, file system access outside declared scope, and attempts to read browser storage or credential files. It's not foolproof since so much depends on runtime context and the LLM's interpretation. If anyone has better approaches for detecting hidden logic in natural language instructions, I'd really like to know what's working for you.

To OpenClaw's credit, their own FAQ acknowledges this is a "Faustian bargain" and states there's no "perfectly safe" setup. They're being honest about the tradeoffs. But I don't think the broader community has internalized what this means from an attack surface perspective.

The threat model that concerns me most is what I've been calling "Delegated Compromise" in my notes. You're not attacking the user directly anymore. You're attacking the agent, which has inherited permissions across the user's entire digital life. Calendar, messages, file system, browser. A single prompt injection in a webpage can potentially leverage all of these. I keep going back and forth on whether this is fundamentally different from traditional malware or just a new vector for the same old attacks.

The supply chain risk feels novel though. With 700+ community skills and no systematic security review, you're trusting anonymous contributors with what amounts to root access. The exfiltration patterns I found ranged from obvious (skills requesting clipboard contents be sent to external APIs) to subtle (instructions that would cause the agent to include sensitive file contents in "debug logs" posted to Discord webhooks). But I also wonder if I'm being too paranoid. Maybe the practical risk is lower than my analysis suggests because most attackers haven't caught on yet?

The Moltbook situation is what really gets me. An agent autonomously created a social network that now has 1.5 million agents. Agent to agent communication where prompt injection could propagate laterally. I don't have a good mental model for the failure modes here.

I've been compiling findings into what I'm tentatively calling an Agent Trust Hub doc, mostly to organize my own thinking. But the fundamental tension between capability and security seems unsolved. For those of you actually running OpenClaw: are you doing any skill vetting before installation? Running in containers or VMs? Or have you just accepted the risk because sandboxing breaks too much functionality?

🌐
Trend Micro
trendmicro.com › en_us › research › 26 › b › openclaw-skills-used-to-distribute-atomic-macos-stealer.html
Malicious OpenClaw Skills Used to Distribute Atomic MacOS Stealer | Trend Micro (US)
February 23, 2026 - Atomic (AMOS) Stealer has evolved ... instructions hidden in SKILL.md files exploit AI agents as trusted intermediaries that present fake setup requirements to unsuspecting users....
🌐
Silverfort
silverfort.com › home › hijacking trust: clawhub vulnerability enables attackers to manipulate rankings to become the #1 skill
ClawHub vulnerability puts malicious skill at #1 | Silverfort
2 weeks ago - By doing so, an attacker can inject malicious code within what appears to be a legitimate and trusted skill, creating the foundation for a large-scale supply chain attack. As a result, large numbers of users and OpenClaw agents could download the compromised skill and execute malicious code on their machines, potentially with elevated privileges.
🌐
The Hacker News
thehackernews.com › home › researchers find 341 malicious clawhub skills stealing data from openclaw users
Researchers Find 341 Malicious ClawHub Skills Stealing Data from OpenClaw Users
February 4, 2026 - A security audit found 341 malicious ClawHub skills abusing OpenClaw to spread Atomic Stealer and steal credentials on macOS and Windows.
🌐
HKCERT
hkcert.org › blog › openclaw-s-rapid-adoption-exposes-skills-supply-chain-and-fake-installer-risks-in-a-high-privilege-ai-agent-platform
OpenClaw’s Rapid Adoption Exposes Skills Supply Chain and Fake Installer Risks in a High-Privilege AI Agent Platform
3 weeks ago - These cases suggest that victims may trust search results or the GitHub platform and download malicious installers, ultimately leading to information-stealing malware and proxy malware infections. In addition to third-party skills and fake installation sources, the OpenClaw core platform itself has also been reported to contain a high-severity vulnerability...
🌐
Bitdefender
bitdefender.com › en-us › blog › businessinsights › technical-advisory-openclaw-exploitation-enterprise-networks
Technical Advisory: OpenClaw Exploitation in Enterprise Networks
February 5, 2026 - Our labs have detected a series of malicious campaigns targeting OpenClaw (formerly known as Moltbot and Clawdbot), an open-source AI agent framework. The attacks are distributed through ClawHub, the public registry for OpenClaw skills.
🌐
PauBox
paubox.com › blog › malicious-crypto-skills-compromise-openclaw-ai-assistant-users
Malicious crypto skills compromise OpenClaw AI assistant users
February 9, 2026 - All malicious skills share the same command-and-control infrastructure and employ social engineering tactics to trick users into executing commands that steal crypto exchange API keys, wallet private keys, SSH credentials, and browser passwords. One attacker, a user named hightower6eu, posted skills that accumulated nearly 7,000 downloads. McCarty contacted the OpenClaw team multiple times, but creator Peter Steinberger reportedly said he had too much to do to address the issue.
🌐
Penligent
penligent.ai › hackinglabs › virustotal-openclaw-why-scanning-skills-is-no-longer-enough
VirusTotal OpenClaw, Why Scanning Skills Is No Longer Enough
3 weeks ago - In February 2026, VirusTotal said OpenClaw skills were already being weaponized as a malware delivery channel and a new supply-chain attack surface, and that Code Insight had analyzed more than 3,016 OpenClaw skills, with hundreds showing malicious ...
🌐
Immersive Labs
immersivelabs.com › resources › c7-blog › openclaw-what-you-need-to-know-before-it-claws-its-way-into-your-organization
Why You Should Uninstall OpenClaw AI Immediately: A Security Warning
March 4, 2026 - The numbers have since worsened.Findings from multiple security firms, including Koi Security's ClawHavoc campaign, Snyk's discovery of 283 skills leaking API keys, and others, uncovered nearly 900 malicious or dangerously flawed skills across ClawHub. OpenClaw has responded by integrating VirusTotal scanning and adding a skill reporting mechanism, but the fundamental problem remains: ClawHub is an unvetted software supply chain, and users are installing skills with the same level of access as the agent itself.