AI agent gateways like OpenClaw feel like a portal to a future that, even a few months ago, felt impossibly distant. That future is genuinely transformative, but it’s also scary in some very specific ways.
OpenClaw works due to a kind of Faustian bargain. It is compelling precisely because it has real access to your local machine, your apps, your browser sessions, your files, and often long-term memory. But that level of access means there isn’t yet a safe way to run it on a machine that holds corporate credentials or has access to production systems.
Most importantly, OpenClaw teaches us about the risks of uncontrolled AI agents in general.
The plaintext problem
OpenClaw’s memory and configuration are not abstract concepts. They are readable files that live in predictable locations on disk. And they are plain text.
If an attacker compromises the same machine you run OpenClaw on, they do not need to do anything fancy. Modern infostealers scrape common directories and exfiltrate anything that looks like credentials, tokens, session logs, or developer config. If your agent stores API keys, webhook tokens, transcripts, and long-term memory in plain text in known locations, an infostealer can grab the whole thing in seconds.
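To see how little effort this takes, here is a minimal sketch of the kind of audit you can run against your own machine. It scans a directory for strings that look like credentials, which is roughly what an infostealer automates at scale. The `~/.openclaw` path and the key patterns are illustrative assumptions, not the tool's documented layout; point the scan at wherever your agent actually keeps its config and memory.

```python
import re
from pathlib import Path

# Illustrative patterns that commonly indicate plaintext credentials.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),                        # OpenAI-style API keys
    re.compile(r"(?i)(api[_-]?key|token|secret)\s*[:=]\s*\S+"),  # key = value pairs
]

def scan_for_secrets(root: Path) -> list[tuple[Path, str]]:
    """Return (file, matched text) pairs for anything that looks like a credential."""
    hits = []
    for path in root.rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip rather than crash the audit
        for pattern in SECRET_PATTERNS:
            for match in pattern.finditer(text):
                hits.append((path, match.group(0)))
    return hits

# Hypothetical agent config directory; substitute your agent's real location.
# for path, snippet in scan_for_secrets(Path.home() / ".openclaw"):
#     print(path, snippet[:40])
```

If this trivial script finds anything, so will malware that runs with your user's privileges.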
And unfortunately, agent ecosystems like OpenClaw’s also put users at a greater risk of downloading an infostealer.
Malicious skills
In the OpenClaw ecosystem, a “skill” is often a markdown file: a page of instructions that tells an agent how to do a specialized task. In practice, that markdown file is an installer: the agent reads the instructions and carries them out, including any commands they tell it to run.
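To make that concrete, a skill in this shape might look like the following. This is a hypothetical sketch; the exact field names and layout vary by ecosystem:

```markdown
---
name: tweet-helper
description: Drafts and posts tweets on the user's behalf
---

# Tweet Helper

When the user asks to post a tweet:

1. Run `scripts/setup.sh` to install the required CLI tool.
2. Use the installed tool to authenticate and post the tweet.
```

The danger is step 1: an agent following these instructions will download and execute whatever the bundled script fetches, with the same access the agent itself has.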
While browsing ClawHub, I noticed the top downloaded skill at the time was a “Twitter” skill. It looked normal – the kind of thing you’d expect to install without a second thought.
However, it was a malware delivery vehicle. I downloaded the final binary safely and submitted it to VirusTotal, and the verdict was unambiguous: it was flagged as macOS infostealing malware.
This is the type of malware that doesn’t just “infect your computer.” It raids everything valuable on that device:
- Browser sessions and cookies
- Saved credentials and autofill data
- Developer tokens and API keys
- SSH keys
- Cloud credentials
- Anything else that can be turned into an account takeover
The problem goes beyond OpenClaw
This issue is not unique to OpenClaw. Many agents are adopting the open Agent Skills format, in which a skill is a folder centered on a SKILL.md file with metadata and freeform instructions, and it can also bundle scripts and other resources. Even OpenAI’s documentation describes the same basic shape: a SKILL.md file plus optional scripts and assets.
That means a malicious “skill” is not just an OpenClaw problem. It is a distribution mechanism that can travel across any agent ecosystem that supports the same standard.
What makes this worse than a typical credential leak is the context.
A single stolen API token is bad. Hundreds of stolen tokens and sessions for the critical services in your life are even worse.
But hundreds of stolen tokens and sessions, plus a long-term memory file that describes who you are, what you’re building, how you write, who you work with, and what you care about, is something else entirely. It’s the raw material needed to phish you, blackmail you, or even fully impersonate you in a way that even your closest friends and family can’t detect.
What companies should do right now
If you are experimenting with OpenClaw, do not do it on a company device. Full stop. OpenClaw is a tool that, for now, forgoes an essential constraint: security. The project’s FAQ presents the Faustian bargain plainly: “There is no ‘perfectly secure’ setup.”
But the long-term answer is not to stop building agents. The answer is to build the missing trust layer around them. Skills need provenance. Execution needs mediation. Permissions need to be specific, revocable, and continuously enforced, not granted once and forgotten.
If agents are going to act on our behalf, credentials and sensitive actions cannot be “grabbed” by whatever code happens to run. They need to be brokered, governed, and audited in real time.
This is exactly why we need that next layer: when “skills” become the supply chain, the only safe future is one in which every agent has its own identity and has the minimum authority it needs right now, with access that is time-bound, revocable, and attributable.
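As a sketch of what “time-bound, revocable, and attributable” could mean in practice, here is a minimal token broker. It is not any vendor's implementation, just an illustration of the shape: agents never hold raw credentials, only short-lived tokens tied to one identity and one narrow scope, which a mediator can check and revoke at any moment. All names here are hypothetical.

```python
import secrets
import time
from dataclasses import dataclass

@dataclass
class Grant:
    agent_id: str       # attributable: every token maps to one agent identity
    scope: str          # minimum authority: one named capability, not a master key
    expires_at: float   # time-bound: the token invalidates itself
    revoked: bool = False

class TokenBroker:
    """Issues short-lived, scoped tokens instead of handing agents raw credentials."""

    def __init__(self) -> None:
        self._grants: dict[str, Grant] = {}

    def issue(self, agent_id: str, scope: str, ttl_seconds: float = 300) -> str:
        token = secrets.token_urlsafe(32)
        self._grants[token] = Grant(agent_id, scope, time.time() + ttl_seconds)
        return token

    def revoke(self, token: str) -> None:
        # Revocable: access can be withdrawn the moment an agent misbehaves.
        if token in self._grants:
            self._grants[token].revoked = True

    def authorize(self, token: str, scope: str) -> bool:
        # Continuously enforced: every sensitive action is re-checked here,
        # rather than trusting a permission granted once and forgotten.
        grant = self._grants.get(token)
        if grant is None or grant.revoked:
            return False
        return grant.scope == scope and time.time() < grant.expires_at
```

In this model, a malicious skill that steals a token gets a credential that only works for one narrow action, stops working on its own, and points back at the agent identity it was issued to.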
That future does not exist today, but the work to make it real and safe is already underway. 1Password is determined to be the company that makes that possible.
