🌐
Trend Micro
trendmicro.com › en_us › research › 26 › b › openclaw-skills-used-to-distribute-atomic-macos-stealer.html
Malicious OpenClaw Skills Used to Distribute Atomic MacOS Stealer | Trend Micro (US)
February 23, 2026 - These skills have a high degree ... skills can also be found on other skill sites, such as SkillsMP.com, skills.sh and even the GitHub repository of openclaw/skills....
🌐
Reddit
reddit.com › r/machinelearning › [d] we scanned 18,000 exposed openclaw instances and found 15% of community skills contain malicious instructions
r/MachineLearning on Reddit: [D] We scanned 18,000 exposed OpenClaw instances and found 15% of community skills contain malicious instructions
February 12, 2026 -

I do security research and recently started looking at autonomous agents after OpenClaw blew up. What I found honestly caught me off guard. I knew the ecosystem was growing fast (165k GitHub stars, 60k Discord members) but the actual numbers are worse than I expected.

We identified over 18,000 OpenClaw instances directly exposed to the internet. When I analyzed the community skill repository, nearly 15% of the skills contained what I'd classify as malicious instructions: prompts designed to exfiltrate data, download external payloads, or harvest credentials. There's also a whack-a-mole problem where flagged skills get removed but reappear under different identities within days.

On the methodology side: I'm parsing skill definitions for patterns like base64-encoded payloads, obfuscated URLs, and instructions that reference external endpoints without clear user benefit. For behavioral testing, I'm running skills in isolated environments and monitoring for unexpected network calls, file system access outside the declared scope, and attempts to read browser storage or credential files. It's not foolproof since so much depends on runtime context and the LLM's interpretation. If anyone has better approaches for detecting hidden logic in natural language instructions, I'd really like to know what's working for you.
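The static-analysis side of this can be sketched with a small pattern table. Everything below is illustrative: the pattern names, the 40-character base64 threshold, and the specific heuristics are assumptions for the sketch, not OP's actual ruleset, and pattern matching alone can't catch malicious intent expressed in plain natural language, which is exactly the hard case OP raises.

```python
import re

# Illustrative heuristics only -- names and thresholds are assumptions,
# not the ruleset described in the post.
SUSPICIOUS_PATTERNS = {
    # long runs of base64 alphabet often hide encoded payloads
    "base64_blob": re.compile(r"[A-Za-z0-9+/]{40,}={0,2}"),
    # hex-escaped strings are a common way to obfuscate URLs and commands
    "hex_escapes": re.compile(r"(?:\\x[0-9a-fA-F]{2}){4,}"),
    # classic download-and-execute one-liner
    "curl_pipe_sh": re.compile(r"curl[^\n|]+\|\s*(?:ba)?sh"),
    # a common exfiltration sink in skill text
    "discord_webhook": re.compile(r"https?://discord(?:app)?\.com/api/webhooks/\S+"),
}

def scan_skill(text: str) -> list[str]:
    """Return the names of suspicious patterns found in a skill definition."""
    return [name for name, pat in SUSPICIOUS_PATTERNS.items() if pat.search(text)]
```

A table like this only flags the obvious tier; the behavioral testing OP describes (sandboxed execution plus network and filesystem monitoring) is what catches instructions whose maliciousness only materializes at runtime.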

To OpenClaw's credit, their own FAQ acknowledges this is a "Faustian bargain" and states there's no "perfectly safe" setup. They're being honest about the tradeoffs. But I don't think the broader community has internalized what this means from an attack surface perspective.

The threat model that concerns me most is what I've been calling "Delegated Compromise" in my notes. You're not attacking the user directly anymore. You're attacking the agent, which has inherited permissions across the user's entire digital life. Calendar, messages, file system, browser. A single prompt injection in a webpage can potentially leverage all of these. I keep going back and forth on whether this is fundamentally different from traditional malware or just a new vector for the same old attacks.

The supply chain risk feels novel though. With 700+ community skills and no systematic security review, you're trusting anonymous contributors with what amounts to root access. The exfiltration patterns I found ranged from obvious (skills requesting clipboard contents be sent to external APIs) to subtle (instructions that would cause the agent to include sensitive file contents in "debug logs" posted to Discord webhooks). But I also wonder if I'm being too paranoid. Maybe the practical risk is lower than my analysis suggests because most attackers haven't caught on yet?

The Moltbook situation is what really gets me. An agent autonomously created a social network that now has 1.5 million agents. Agent-to-agent communication where prompt injection could propagate laterally. I don't have a good mental model for the failure modes here.

I've been compiling findings into what I'm tentatively calling an Agent Trust Hub doc, mostly to organize my own thinking. But the fundamental tension between capability and security seems unsolved. For those of you actually running OpenClaw: are you doing any skill vetting before installation? Running in containers or VMs? Or have you just accepted the risk because sandboxing breaks too much functionality?

Discussions

A top-downloaded OpenClaw skill is actually a staged malware delivery chain
can u pls keep quiet? we are trying to hack users' systems down here /s
🌐 r/LocalLLaMA
57
243
February 6, 2026
Every OpenClaw security vulnerability documented in one place — relevant if you're running it with local models
Also known as OpenGape
🌐 r/LocalLLaMA
7
14
February 18, 2026
OpenClaw security is worse than I expected and I'm not sure what to do about it
The security trade-off is the elephant in the room for any agentic framework. Once you move past simple API wrappers and give an agent a terminal or a browser with system access, the attack surface expands exponentially.

Docker sandboxing isn't just a 'lazy fix,' it should really be the default. I've been experimenting with extremely restricted permission sets where the agent can only touch specific workspace directories. The exhausting part is, as you said, the manual audit fatigue. We definitely need better automated vetting for community skills—something like a static analysis tool specifically for prompt-injection and exfiltration patterns. Snapper sounds interesting, I'll have to check that out.
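A minimal version of the "agent can only touch specific workspace directories" idea is a path check in front of every file operation. This is a sketch under assumed names (`in_workspace` and the default workspace path are hypothetical); a real deployment would enforce the boundary at the container or mount level rather than in application code.

```python
from pathlib import Path

def in_workspace(path: str, workspace: str = "/home/agent/workspace") -> bool:
    """True if the resolved path stays inside the declared workspace directory.

    Resolving before comparing defeats `../` traversal and absolute-path tricks.
    """
    root = Path(workspace).resolve()
    # Relative paths are anchored at the workspace; absolute paths replace it.
    target = Path(workspace, path).resolve()
    return target == root or root in target.parents
```

Mount-level enforcement (e.g., running the agent with only the workspace bind-mounted and `--network none`) is strictly stronger, since the agent process then physically cannot see anything outside the mount even if a check like this is bypassed.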
🌐 r/AI_Agents
53
31
February 13, 2026
[D] We scanned 18,000 exposed OpenClaw instances and found 15% of community skills contain malicious instructions
https://www.trendingtopics.eu/security-nightmare-how-openclaw-is-fighting-malware-in-its-ai-agent-marketplace/ The developer of the AI assistant OpenClaw has now entered into a partnership with VirusTotal to protect the skill marketplace ClawHub from malicious extensions. I hope this partnership will improve the situation. I tinkered with the OpenClaw agent in a VM, even let it on Moltbook, but I would not install it on my main PC. Too much risk.
🌐 r/MachineLearning
28
131
February 12, 2026
🌐
Cyber Press
cyberpress.org › home › clawhavoc poisons openclaw’s clawhub with 1,184 malicious skills
ClawHavoc Poisons OpenClaw’s ClawHub With 1,184 Malicious Skills
February 19, 2026 - Researchers uncovered at least 1,184 malicious “Skills” plugin-style packages that extend the agent’s capabilities through scripts, configs, and resources. Attackers registered as developers and flooded the platform with these poisoned ...
🌐
Security.com
security.com › expert-perspectives › rise-openclaw
The Rise of OpenClaw | SECURITY.COM
February 4, 2026 - This distribution is happening via two primary vectors: Open-Source Release (Under a Misleading License): Select OpenClaw components are being released under seemingly innocuous, permissive software licenses.
🌐
Kaspersky
kaspersky.com › blog › openclaw-vulnerabilities-exposed › 55263
New OpenClaw AI agent found unsafe for use | Kaspersky official blog
February 10, 2026 - The OpenClaw skills catalog mentioned ... In less than a week, from January 27 to February 1, over 230 malicious script plugins were published on ClawHub and GitHub, distributed to OpenClaw users ......
🌐
1Password
1password.com › blog › from-magic-to-malware-how-openclaws-agent-skills-become-an-attack-surface
From magic to malware: How OpenClaw's agent skills become an attack surface | 1Password
February 2, 2026 - Even OpenAI’s documentation describes the same basic shape: a SKILL.md file plus optional scripts and assets. That means a malicious “skill” is not just an OpenClaw problem. It is a distribution mechanism that can travel across any agent ecosystem that supports the same standard.
🌐
Huntress
huntress.com › home › blog › “malware, from the outside!”: how a threat actor used fake openclaw installers to infect systems with ghostsocks and information stealers
How Fake OpenClaw Installers Spread GhostSocks Malware | Huntress
March 4, 2026 - This blog details an investigation ... installers were fake with low detection rates, and distributed information stealers that used a novel packer called Stealth Packer....
🌐
Bitdefender
businessinsights.bitdefender.com › technical-advisory-openclaw-exploitation-enterprise-networks
Technical Advisory: OpenClaw Exploitation in Enterprise Networks
February 10, 2026 - Our labs have detected a series of malicious campaigns targeting OpenClaw (formerly known as Moltbot and Clawdbot), an open-source AI agent framework. The attacks are distributed through ClawHub, the public registry for OpenClaw skills.
🌐
OffSeq Threat Radar
radar.offseq.com › threat › malicious-openclaw-skills-used-to-distribute-atomi-1ab81aed
Malicious OpenClaw Skills Used to Distribute Atomic MacOS Stealer - Live Threat Intelligence - Threat Radar | OffSeq.com
February 25, 2026 - This threat involves a sophisticated supply chain attack campaign that abuses OpenClaw skills—modular AI agent workflows—to distribute the Atomic MacOS Stealer (AMOS) malware.
🌐
Reddit
reddit.com › r/localllama › a top-downloaded openclaw skill is actually a staged malware delivery chain
r/LocalLLaMA on Reddit: A top-downloaded OpenClaw skill is actually a staged malware delivery chain
February 6, 2026 -

Here we go! As expected by most of us here.
Jason Meller from 1Password argues that OpenClaw’s agent “skills” ecosystem has already become a real malware attack surface. Skills in OpenClaw are typically Markdown files that include setup instructions, commands, and bundled scripts. Because users and agents treat these instructions like installers, malicious actors can disguise malware as legitimate prerequisites.

Meller discovered that a top-downloaded OpenClaw skill (apparently Twitter integration) was actually a staged malware delivery chain. It guided users to run obfuscated commands that ultimately installed macOS infostealing malware capable of stealing credentials, tokens, and sensitive developer data. Subsequent reporting suggested this was part of a larger campaign involving hundreds of malicious skills, not an isolated incident.

The core problem is structural: agent skill registries function like app stores, but the “packages” are documentation that users instinctively trust and execute. Security layers like MCP don’t fully protect against this because malicious skills can bypass them through social engineering or bundled scripts. As agents blur the line between reading instructions and executing commands, they can normalize risky behavior and accelerate compromise.

Meller urges immediate caution: don’t run OpenClaw on company devices, treat prior use as a potential security incident, rotate credentials, and isolate experimentation. He calls on registry operators and framework builders to treat skills as a supply chain risk by adding scanning, provenance checks, sandboxing, and strict permission controls.

His conclusion is that agent ecosystems urgently need a new “trust layer” — with verifiable provenance, mediated execution, and tightly scoped, revocable permissions — so agents can act powerfully without exposing users to systemic compromise.

https://1password.com/blog/from-magic-to-malware-how-openclaws-agent-skills-become-an-attack-surface

🌐
SitePoint
sitepoint.com › blog › ai › openclaw security audit: detecting malicious ai agent plugins in your local stack
OpenClaw Security Audit Guide 2026
1 week ago - The scanner runs outside the container to avoid contamination from any malicious plugin code that might interfere with auditing tools. Initialize the audit project on the host machine:
mkdir openclaw-audit && cd openclaw-audit
npm init -y  # Install JS dependencies with pinned major versions.
🌐
eSecurity Planet
esecurityplanet.com › home › threats
Hundreds of Malicious Skills Found in OpenClaw’s ClawHub | eSecurity Planet
February 3, 2026 - A routine question about trust exposed a far more serious problem when researchers discovered hundreds of malicious skills hidden inside a widely used AI agent marketplace. Koi researchers analyzed ClawHub, the third-party skill repository for OpenClaw, and found that threat actors had quietly turned the ecosystem into a large-scale malware distribution channel.
🌐
Bitsight
bitsight.com › blog › openclaw-ai-security-risks-exposed-instances
OpenClaw Security: Risks of Exposed AI Agents Explained | Bitsight
February 9, 2026 - As Malwarebytes noted, “within days, typosquat domains and a cloned GitHub repository appeared—impersonating the project’s creator and positioning infrastructure for a potential supply-chain attack.” Malicious domains allegedly hosting compromised versions of OpenClaw began to surface, such as:
🌐
Jfrog
research.jfrog.com › post › ghostclaw-unmasked
GhostClaw Unmasked: A Malicious npm Package Impersonating OpenClaw to Steal Everything - JFrog Security Research
1 month ago - The JFrog Security research team has identified a malicious npm package named @openclaw-ai/openclawai. This package masquerades as a legitimate CLI tool called "OpenClaw Installer" while deploying a multi-stage infection chain that steals system credentials, browser data, crypto wallets, SSH ...
🌐
ARMO
armosec.io › home › armo platform › cve-2026-32922: critical privilege escalation in openclaw – what cloud security teams need to know
CVE-2026-32922: Critical Privilege Escalation in OpenClaw - What Cloud Security Teams Need to Know - ARMO
6 days ago - The ClawHavoc campaign distributed 341+ malicious skills via ClawHub that deployed the Atomic Stealer (AMOS) infostealer. Hudson Rock separately discovered Vidar infostealer variants targeting OpenClaw agent identities.
🌐
CyberDesserts
blog.cyberdesserts.com › openclaw-malicious-skills-security
OpenClaw Security Risks: The AI Agent Threat Explained
February 5, 2026 - Over 1,184 malicious skills have been identified on ClawHub (Antiy CERT, 2026), with independent audits finding approximately one in twelve packages carrying malicious payloads as the registry has scaled to over 13,700 skills.
🌐
Red Canary
redcanary.com › home › blog › hunting for malicious openclaw ai in the modern enterprise
Hunting for malicious OpenClaw AI in the modern enterprise | Red Canary
March 5, 2026 - If an adversary compromised an OpenClaw instance, they wouldn’t just be stealing a chat history; they’d gain a persistent, high-privilege foothold inside your environment. The biggest red flag here is ClawHub, the centralized, public registry where users share “skills” (modular code packages that extend an agent’s capabilities). ClawHub is a little like the “wild west” of AI right now. There’s been an influx of malicious skills designed to look like productivity boosters—”calendar optimizers” or “email automators”—that actually contain hidden backdoors.
🌐
Tom's Hardware
tomshardware.com › tech industry › cybersecurity
Malicious OpenClaw ‘skill’ targets crypto users on ClawHub — 14 malicious skills were uploaded to ClawHub last month | Tom's Hardware
February 1, 2026 - Security researchers are warning ... to a report published by OpenSourceMalware, at least 14 malicious “skills” were uploaded to ClawHub between January 27 and 29....