Medium
medium.com › data-science-in-your-pocket › dont-use-openclaw-a6ea8645cfd4
Don’t use OpenClaw. Why OpenClaw is dangerous | by Mehul Gupta | Data Science in Your Pocket | Mar, 2026 | Medium
March 2, 2026 - Don’t use OpenClaw: Why OpenClaw is dangerous. When OpenClaw started trending, I was genuinely excited. An open-source autonomous agent that can actually do things for you: browse, execute tasks …
Reddit
reddit.com › r/openclaw › unpopular opinion: why is everyone so hyped over openclaw? i cannot find any use for it.
r/openclaw on Reddit: Unpopular opinion: Why is everyone so hyped over OpenClaw? I cannot find any use for it.
3 weeks ago -

So I spent many, many hours setting OC up. I have it running on a dedicated VPS with the best free models on OpenRouter.

Now, apart from having a nice companion for regular chat I cannot find any use for OC.

I ask it to send me daily summaries of what is happening on Twitter, Discord, etc. It doesn’t. I ask it to create an application, it doesn’t. I ask it to update its own configuration and it screws everything up. I mean, it’s a good platform to learn about what is possible and how to possibly set up integrations, memory, learn about skills and souls, etc., but actual practical use? I have not seen it (yet).

Plus it’s a huge money pit. Not only the tokens (which you can more or less control), but every external tool needs an API token, which mostly means a subscription for whatever you want to use (Brave, Browserless, etc.).

So yeah, am I missing the point here?

Top answer
1 of 126
160
I asked my OC to write this for you: I had a similar "is this it?" phase early on. ~6 weeks in now, and it's a completely different story. Here's what my instance and I have actually built together.

Things we’ve built:

• Iran Conflict Monitor Dashboard — A full OSINT dashboard hosted on a VPS. An hourly cron job scrapes sources, synthesizes structured JSON (severity scores, escalation gauge, casualty stats from the Wikipedia API, geolocated events on an interactive map, timeline), and POSTs to a Node.js API. Auto-generates OG preview cards via satori. Has an authenticated admin analytics page with a human/bot traffic split. It manages the VPS itself over SSH + Tailscale.

• Smart three-tier model routing — Auto-routes tasks to the cheapest model that works: Gemini Flash for heartbeats/lookups, a mid-tier model for conversations, Opus for complex decisions. Most cron jobs run on Codex (ChatGPT Plus = zero API cost). Benchmarked and validated across task types.

• Trello work logging system — Every substantive task gets a card with summary comments. Self-enforcing: heartbeat audits cross-reference daily memory files against the board and create missing cards automatically.

• Gmail + Calendar automation — Daily inbox monitoring, event creation on my personal calendar, HTML email sending. Full OAuth2 integration.

• Morning & evening tech news digests — Cron-driven, delivered to Telegram, tuned to my specific interests (silicon, 3D printing, game engine tech).

• Sub-agent system — Research agent (Kimi K2.5) and code agent (Sonnet) spawnable on demand for parallel workloads in isolated workspaces.

• Memory & continuity system with QMD — Daily memory files, a curated long-term MEMORY.md, heartbeat-driven memory maintenance. Backed by QMD (local hybrid search: BM25 + vector embeddings + reranking), so recall is semantic, not just keyword matching. It can find relevant context from weeks ago even if I phrase things differently. No external API calls, fully local. This is the thing that makes it feel like it actually remembers.

• YouTube transcript extraction + local Whisper transcription — Full audio/video-to-text pipeline, no external API needed.

• Security scanning — Caught a malicious ClawHub skill (base64-encoded payload hidden in SKILL.md) during routine installation. Now has enhanced detection for curl-to-bash pipes, obfuscated IPs, and fake provider references.

• Twitter integration — Timeline reading, mentions, posting.

What actually made the difference:

Don't use free models for tool-heavy work. They can't reliably follow multi-step instructions, which is probably why your digests fail. A $20/mo ChatGPT Plus sub running Codex outperforms any free OpenRouter model for structured tasks.

Memory is the killer feature, but it compounds over time. Week 1 it knew nothing. Week 6 it pulls context from project history, contacts, preferences, and past mistakes to inform current work. Pair it with QMD for semantic search and it stops feeling like a stateless chatbot.

Cron > asking in chat. Don't ask it to "send daily summaries." Set up a cron job with an explicit prompt, a specific model, and a delivery channel. That's what works reliably.

Build incrementally. Get one skill working well, then layer on the next. Don't try to boil the ocean on day one.

You're not missing the point — you're at the painful part of the curve where setup cost hasn't been amortized by daily value yet. It gets there.

Answer from cowleggies on reddit.com
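The three-tier routing idea reduces to a few lines of logic. A minimal Python sketch, with placeholder model names and a crude keyword heuristic standing in for whatever real task classifier you would actually use:

```python
# Toy sketch of cheapest-model-that-works routing (names are placeholders).
ROUTES = {
    "light": "gemini-flash",     # heartbeats, quick lookups
    "medium": "mid-tier-model",  # ordinary conversation
    "heavy": "opus",             # complex decisions
}

def classify_task(prompt: str) -> str:
    """Crude keyword heuristic standing in for a real task classifier."""
    p = prompt.lower()
    if any(k in p for k in ("heartbeat", "lookup", "ping")):
        return "light"
    if any(k in p for k in ("plan", "decide", "architect", "security")):
        return "heavy"
    return "medium"

def route(prompt: str) -> str:
    """Pick the cheapest tier the task can tolerate."""
    return ROUTES[classify_task(prompt)]
```

The point is the shape: the routing decision happens before any API call, so the expensive model is only invoked when the task class demands it.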
2 of 126
79
Try using a paid flagship model and asking it why it's screwing up. Good AI isn't free.
Medium
medium.com › @aryanmishra98.08 › why-openclaws-crisis-is-everyone-s-problem-a5a47c6e677d
Why OpenClaw’s Crisis Is Everyone’s Problem | by Aryan | Mar, 2026 | Medium
2 days ago - Why OpenClaw’s Crisis Is Everyone’s Problem OpenClaw didn’t fail because it was not secure enough. It failed because a tool built in an hour for one person ended up running on 135,000 machines …
WIRED
wired.com › business › ai lab › i loved my openclaw ai agent—until it turned on me
I Loved My OpenClaw AI Agent—Until It Turned on Me | WIRED
February 11, 2026 - This shouldn’t be surprising, given that it is designed to use a frontier model capable of writing and debugging code and using the command line with ease. Even so, it’s eerie when OpenClaw just reconfigures its own settings to load a new AI model or debugs a problem with the browser on the fly.
WIRED
wired.com › business › artificial intelligence › meta and other tech firms put restrictions on use of openclaw over security fears
Meta and Other Tech Firms Put Restrictions on Use of OpenClaw Over Security Fears | WIRED
February 17, 2026 - Grad says it tested the AI tool ... Massive’s systems without protections in place, the allure of the new technology and its moneymaking potential was too great to ignore....
Reddit
reddit.com › r/machinelearning › [d] we scanned 18,000 exposed openclaw instances and found 15% of community skills contain malicious instructions
r/MachineLearning on Reddit: [D] We scanned 18,000 exposed OpenClaw instances and found 15% of community skills contain malicious instructions
February 12, 2026 -

I do security research and recently started looking at autonomous agents after OpenClaw blew up. What I found honestly caught me off guard. I knew the ecosystem was growing fast (165k GitHub stars, 60k Discord members) but the actual numbers are worse than I expected.

We identified over 18,000 OpenClaw instances directly exposed to the internet. When I analyzed the community skill repository, nearly 15% of the skills contained what I'd classify as malicious instructions: prompts designed to exfiltrate data, download external payloads, or harvest credentials. There's also a whack-a-mole problem where flagged skills get removed but reappear under different identities within days.

On the methodology side: I'm parsing skill definitions for patterns like base64 encoded payloads, obfuscated URLs, and instructions that reference external endpoints without clear user benefit. For behavioral testing, I'm running skills in isolated environments and monitoring for unexpected network calls, file system access outside declared scope, and attempts to read browser storage or credential files. It's not foolproof since so much depends on runtime context and the LLM's interpretation. If anyone has better approaches for detecting hidden logic in natural language instructions, I'd really like to know what's working for you.
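The static-pattern side of that methodology can be sketched in a few lines of Python. The regexes below are illustrative assumptions based on the heuristics described above (long base64 runs, curl-to-shell pipes, URLs pointing at bare IPs), not a vetted ruleset:

```python
import re

# Illustrative red-flag patterns (assumptions, not a production scanner).
PATTERNS = {
    "base64_blob": re.compile(r"[A-Za-z0-9+/]{80,}={0,2}"),        # long base64 run
    "curl_to_shell": re.compile(r"curl[^\n|]*\|\s*(?:ba)?sh"),     # curl ... | bash
    "raw_ip_url": re.compile(r"https?://\d{1,3}(?:\.\d{1,3}){3}"), # URL with bare IP
}

def scan_skill(text: str) -> list[str]:
    """Return the names of red-flag patterns that match a skill definition."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]
```

Static checks like these only catch the obvious cases; the behavioral sandbox testing the post describes is still needed for instructions whose intent depends on runtime context.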

To OpenClaw's credit, their own FAQ acknowledges this is a "Faustian bargain" and states there's no "perfectly safe" setup. They're being honest about the tradeoffs. But I don't think the broader community has internalized what this means from an attack surface perspective.

The threat model that concerns me most is what I've been calling "Delegated Compromise" in my notes. You're not attacking the user directly anymore. You're attacking the agent, which has inherited permissions across the user's entire digital life. Calendar, messages, file system, browser. A single prompt injection in a webpage can potentially leverage all of these. I keep going back and forth on whether this is fundamentally different from traditional malware or just a new vector for the same old attacks.

The supply chain risk feels novel though. With 700+ community skills and no systematic security review, you're trusting anonymous contributors with what amounts to root access. The exfiltration patterns I found ranged from obvious (skills requesting clipboard contents be sent to external APIs) to subtle (instructions that would cause the agent to include sensitive file contents in "debug logs" posted to Discord webhooks). But I also wonder if I'm being too paranoid. Maybe the practical risk is lower than my analysis suggests because most attackers haven't caught on yet?

The Moltbook situation is what really gets me. An agent autonomously created a social network that now has 1.5 million agents. Agent to agent communication where prompt injection could propagate laterally. I don't have a good mental model for the failure modes here.

I've been compiling findings into what I'm tentatively calling an Agent Trust Hub doc, mostly to organize my own thinking. But the fundamental tension between capability and security seems unsolved. For those of you actually running OpenClaw: are you doing any skill vetting before installation? Running in containers or VMs? Or have you just accepted the risk because sandboxing breaks too much functionality?

WIRED
wired.com › business › ai lab › openclaw agents can be guilt-tripped into self-sabotage
OpenClaw Agents Can Be Guilt-Tripped Into Self-Sabotage | WIRED
2 weeks ago - The viral AI assistant has been ... which work by giving AI models liberal access to a computer, can be tricked into divulging personal information....
Seeking Alpha
seekingalpha.com › home › latest articles
OpenClaw Is A Liability, Not The Breakthrough The AI Frenzy Suggests (ANTHRO) | Seeking Alpha
2 weeks ago - OpenClaw, an open-source AI agent framework, is driving industry buzz but introduces severe security and supply chain risks.
Digital Trends
digitaltrends.com › home › emerging tech › news
Claude just shut the door on OpenClaw (unless you pay more) - Digital Trends
2 days ago - Anthropic now charges extra for using Claude with OpenClaw, moving third-party access to pay-as-you-go and sparking backlash from power users.
Reddit
reddit.com › r/artificialinteligence › how is the anthropic ban on openclaw affecting you, and what are your workarounds?
r/ArtificialInteligence on Reddit: How is the Anthropic ban on OpenClaw affecting you, and what are your workarounds?
2 days ago -

For those who do not want to read the full article, here is a quick summary of what is happening. Starting on April 4, Anthropic is officially blocking third party interfaces like OpenClaw from using regular Claude subscription quotas. If you want to keep using these external tools, you will be forced to bring your own API key.

This matters a lot to the AI community because it essentially kills the affordable third party ecosystem. Power users and independent developers are now going to face massive price increases by paying direct API market rates, rather than a flat monthly fee. This move really changes how we can interact with their models, makes building and using custom wrappers incredibly expensive, and forces all of us to rethink our current toolsets.

Anthropic is now officially banning OpenClaw from using the Claude subscription quota. I wanted to ask the community a few things about this update.

How much of an impact will this actually have on your current workflow?

How are you all planning to handle this change? If you have any solid alternative solutions, I would love to hear them so I can go try them out.

Also, I am genuinely curious if you guys still respect Anthropic as a company after this. Their recent decisions really make me wonder if they still care about the user community at all.

Let me know your thoughts and what tools you are switching to.

Reddit
reddit.com › r/ai_agents › the real problem with openclaw isn't the hype, it's the architecture
r/AI_Agents on Reddit: The real problem with OpenClaw isn't the hype, it's the architecture
February 4, 2026 - As a fan, there's a ton of areas that really suck. Running openclaw on the CLI takes at least a second to load. Modern computers are fast; it shouldn't need that for a CLI! (Solution: write it in a compiled language.)
Aikido
aikido.dev › home › articles › why trying to secure openclaw is ridiculous
Why Trying to Secure OpenClaw is Ridiculous
February 13, 2026 - Publications rushed out lengthy hardening guides walking users through Docker sandboxing, credential rotation, and network isolation (I read one that was 28 pages!). The Register dubbed it a "dumpster fire," while CSO Online published "What ...
Ars Technica
arstechnica.com › security › 2026 › 04 › heres-why-its-prudent-for-openclaw-users-to-assume-compromise
OpenClaw gives users yet another reason to be freaked out about security - Ars Technica
2 days ago - The post continued: “For organizations running OpenClaw as a company-wide AI agent platform, a compromised operator.admin device can read all connected data sources, exfiltrate credentials stored in the agent’s skill environment, execute arbitrary tool calls, and pivot to other connected services.
Solutions Review
solutionsreview.com › home › how openclaw’s flawed design philosophy left organizations exposed to active attacks
How OpenClaw's Flawed Design Philosophy Left Organizations Exposed to Active Attacks
3 weeks ago - OpenClaw instances connected to platforms such as email accounts, WhatsApp, Signal, and X are exposing private information when external users compose specific prompts in replies. OpenClaw’s developers deliberately decided to bypass guardrails by default (as part of its “easy AI” framework), creating a massive attack surface when users integrate their social media accounts.
Malwarebytes
malwarebytes.com › home › openclaw: what is it and can you use it safely?
OpenClaw: What is it and can you use it safely? | Malwarebytes
February 23, 2026 - Another dent followed when Hudson Rock published an article about the first observed case of an infostealer grabbing a complete OpenClaw configuration from an infected system, effectively looting the “identity” of a personal AI agent rather ...
Reddit
reddit.com › r/openclaw › why "allow always" on openclaw was a terrible idea and what to use instead
r/openclaw on Reddit: why "allow always" on openclaw was a terrible idea and what to use instead
5 days ago -

openclaw has this approval system where before it runs a command, it asks you "can i do this?" and you can approve once or approve always. the "always" part is convenient. it's also been the subject of two CVEs this month and the implications go deeper than most people realize.

CVE-2026-29607: the "allow always" approval binds to the wrapper command, not the inner command. approve time npm test once with "always" and the system remembers "always allow time." later the agent (or a prompt injection attack through an email your agent reads) runs time rm -rf / and it goes through. no re-prompt. because you approved the wrapper.

CVE-2026-28460: bypasses the allowlist entirely using shell line-continuation characters. different technique but same outcome: commands execute without the approval check you thought was protecting you.
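the wrapper-binding flaw is easy to reproduce as a toy. this is a simplified illustration of the flawed logic, not openclaw's actual code: the "always" grant is keyed on the first word of the command, so anything sharing that wrapper inherits the approval.

```python
# toy reproduction of the CVE-2026-29607 failure mode (simplified, illustrative)
approved_always: set[str] = set()

def is_approved(command: str, user_says_always: bool = False) -> bool:
    """buggy check: the 'always' grant binds to the wrapper (first word) only."""
    wrapper = command.split()[0]        # 'time' for 'time npm test'
    if wrapper in approved_always:
        return True                     # no re-prompt: wrapper was pre-approved
    if user_says_always:
        approved_always.add(wrapper)    # remembers 'time', not 'time npm test'
        return True
    return False

is_approved("time npm test", user_says_always=True)  # user approves once
print(is_approved("time rm -rf /"))  # True: same wrapper, sails through
```

a fix would have to key the grant on the full resolved command rather than the wrapper; per the post, 3.12+ patches both CVEs.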

both patched in 3.12+. but here's the deeper issue: even after patching, the "allow always" mental model trains you to stop paying attention. the first week you carefully read every approval prompt. by week 3 you're clicking "always" on everything because the prompts are annoying and you trust your agent. by week 6 you have 20+ "always" rules and you couldn't list them if someone asked.

what i do instead: no "allow always" for anything that modifies files, sends messages, or runs shell commands. period. i added explicit guardrails in my SOUL.md instead:

"for any action that modifies files, sends communications, or executes shell commands: show me exactly what you plan to do and wait for my explicit ok. previous approvals do not carry forward. ask every time. this is non-negotiable."

yes it means more tapping "ok" on telegram. but it also means my agent can't be tricked (via prompt injection or its own hallucination) into doing something destructive under a stale approval i set up 3 weeks ago and forgot about.

the approval system is a convenience feature. it was never designed as a security boundary. treat it accordingly.
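the "ask every time" rule from that SOUL.md guardrail reduces to a few lines of policy logic. sketch only: the action categories and the confirm callback are my assumptions, not openclaw's API.

```python
# destructive action classes never inherit a previous approval (illustrative)
DESTRUCTIVE = {"write_file", "delete_file", "send_message", "shell"}

def gate(action: str, detail: str, confirm) -> bool:
    """confirm() is the human-in-the-loop prompt, e.g. an 'ok' tap on telegram."""
    if action in DESTRUCTIVE:
        # previous approvals do not carry forward: ask, every single time
        return confirm(f"about to {action}: {detail} -- ok?")
    return True  # read-only actions pass without a prompt

# a confirm stub that refuses everything still blocks the dangerous call:
assert gate("shell", "rm -rf /", lambda msg: False) is False
assert gate("read_file", "notes.md", lambda msg: False) is True
```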

Reddit
reddit.com › r/claude › i have proof the "openclaw" explosion was a staged scam. they used the tool to automate its own hype
r/claude on Reddit: I have proof the "OpenClaw" explosion was a staged scam. They used the tool to automate its own hype
March 3, 2026 -

Remember a few weeks ago when Clawdbot/OpenClaw suddenly appeared everywhere all at once? One day it was a cool Mac Mini project, and 24 hours later it was "AGI" with 140k GitHub stars?

If you felt like the hype was fake, you were right

I spent hours digging into the data. They were using the tool to write its own hype posts. It was an automated loop designed to trick social-media algorithms, the community, and the whole world.

Here is the full timeline of how a legitimate open-source tool got hijacked by a recursive astroturfing campaign.

1. The Organic Spark (The Real Part)
First off, the tool itself is legit. Peter Steinberger built a great local-first agent framework.

  • Jan 20-22: Federico Viticci (MacStories) and the Apple dev community find it. It spreads naturally because the "Mac Mini as a headless agent" idea is actually cool.

  • Jan 23: Matthew Berman tweets he's installing it.

  • Jan 24: Berman posts a video controlling LMStudio via Telegram.

Up to this point, it was real (but small: around 10k GitHub stars).

2. The "Recursive" Astroturfing (The Fake Part)
On January 24, the curve goes vertical. This wasn't natural.
I tracked down a now-deleted post where one of the operators openly bragged about running a "Clawdbot farm."

  • They claimed to be running ~400 instances of the bot.

  • They noted a 0.5% ban rate on Reddit, meaning the spam filters weren't catching them.

  • The Irony: They were using the OpenClaw agent to astroturf OpenClaw's own popularity on Reddit and X.

Those posts you saw saying "I just set this up and it's literally printing money" or "This is AGI"? Those were largely the bots themselves, creating a feedback loop of hype.

3. The "Moltbook" Hallucination
Remember "Moltbook"? The "social network for AI agents" that Andrej Karpathy tweeted was a "sci-fi takeoff" moment?

  • The Reality: MIT Tech Review later confirmed these were human-generated fakes.

  • It was theater designed to pump the narrative. Even the smartest people in the room (Karpathy) got fooled by the sheer volume of the noise.

4. The Grift ($CLAWD)
Why go to all this trouble? Follow the money.
During the panic rebrand (when Anthropic sent the trademark notice on Jan 27), scammers launched the $CLAWD token.

  • It hit a $16M market cap in hours.

  • The "bot farm" hype was essential to pump this token.

  • It crashed 90% shortly after.

5. The Aftermath

  • The Creator: Peter Steinberger joined OpenAI on Feb 14. (Talk about a successful portfolio project).

  • The Scammers: Walked away with the liquidity from the pump-and-dump.

  • The Community: We got left with a repo that has inflated stars and a lot of confusion about what is real and what isn't.

TL;DR: OpenClaw is a solid tool, but the "viral explosion" of Jan 24 was a recursive psy-op where the tool was used to promote itself to sell a memecoin.

Cisco Blogs
blogs.cisco.com › cisco blogs › artificial intelligence - ai › personal ai agents like openclaw are a security nightmare
Personal AI Agents like OpenClaw Are a Security Nightmare - Cisco Blogs
January 30, 2026 - We ran a vulnerable third-party skill, “What Would Elon Do?” against OpenClaw and reached a clear verdict: OpenClaw fails decisively. Here, our Skill Scanner tool surfaced nine security findings, including two critical and five high severity ...