The skill lives at PodFlare-ai/openclaw-podflare. Installing it gives the agent eight sandbox tools (`run_python`, `fork(n)`, `create_sandbox`, etc.) pointing at the Podflare MCP server, so any LLM-generated code runs in a hardware-isolated Podflare Pod microVM instead of on the host machine.
## Install

A single ClawHub command installs the skill into `~/.openclaw/workspace/skills/podflare/` and materializes it as an active Claude-Code plugin.
## Configure

Mint a free API key at dashboard.podflare.ai/keys ($200 starter credit, no credit card, 60 seconds). Then export `PODFLARE_API_KEY` before you start OpenClaw. The skill declares `PODFLARE_API_KEY` as its primaryEnv, so OpenClaw will prompt for it on first run if you haven't set it.
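Exporting the key is a plain shell export; the key value below is a placeholder, and the `pf_` prefix is an assumption, not a documented format:

```shell
# Set the key for the current shell session, then launch OpenClaw.
export PODFLARE_API_KEY="pf_your_key_here"  # placeholder, not a real key
# openclaw                                  # starts with the key available
```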
## Verify

Start OpenClaw and ask:

"Run `print(2 + 2)` in a Podflare sandbox."

The agent should call the `run_python` tool exposed by the skill and return `4`. Run `openclaw skills list` to confirm `podflare` is active.
## What the agent gains

Eight tools from the Podflare MCP server at https://mcp.podflare.ai:

| Tool | What it does |
|---|---|
| `create_sandbox` | Provision a fresh Linux microVM. |
| `run_python` | Execute Python. State persists across calls. |
| `run_bash` | Execute Bash. Fresh subprocess per call. |
| `fork` | Snapshot + spawn N copies of a sandbox (~80 ms). |
| `merge_into` | Commit a child's state onto the parent. |
| `upload` | Write bytes into the sandbox (base64, ≤6 MB raw). |
| `download` | Read bytes out of the sandbox. |
| `destroy_sandbox` | Tear down. Always do this at the end of a session. |
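The execution semantics in the table (persistent Python state, fresh subprocess per Bash call, snapshot-style fork, merge back into the parent) can be illustrated with a small in-memory mock. This is a sketch of the behavior only; the class and methods here are illustrative, not the Podflare implementation:

```python
import subprocess

# In-memory mock of the sandbox tool semantics (illustration only).
class MockSandbox:
    def __init__(self, state=None):
        self.state = dict(state or {})  # Python namespace; persists across calls

    def run_python(self, code):
        exec(code, self.state)          # shared namespace = persistent state

    def run_bash(self, cmd):
        # Fresh subprocess per call, mirroring run_bash semantics.
        return subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout

    def fork(self, n):
        # Snapshot: each child starts from a copy of the parent's state.
        return [MockSandbox(self.state) for _ in range(n)]

    def merge_into(self, parent):
        # Commit this child's state onto the parent.
        parent.state.update(self.state)

sb = MockSandbox()
sb.run_python("x = 21")
sb.run_python("y = x * 2")   # x persists from the previous call, so y = 42
children = sb.fork(3)
children[0].run_python("y += 1")
children[0].merge_into(sb)   # parent now sees y = 43
```

Note the asymmetry the table describes: `run_python` shares one namespace across calls, while each `run_bash` call gets a fresh process and inherits nothing from the previous one.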
## Why run this in a sandbox?

OpenClaw's default Bash tool runs on your machine. One prompt-injected URL, one compromised npm package, or one overly aggressive agent loop, and that shell can read `~/.aws/credentials`, `.env` files, or your loaded SSH keys.
The Podflare skill routes code execution through a disposable,
hardware-isolated Podflare Pod microVM. The sandbox has no access
to your host’s filesystem, env vars, cloud CLI, or SSH
agent. The worst any generated code can do is misbehave inside a
VM that’s getting destroyed anyway.
For the full threat model, see "AI coding agent threat model" on the Podflare blog, and "Why Docker isn't enough for LLM-generated code."
## Session model

Each OpenClaw agent session that touches a sandbox tool creates a fresh Podflare sandbox on first use. The sandbox stays alive for the duration of the session, so imports, variables, and loaded data persist across subsequent `run_python` calls. The agent is expected to call `destroy_sandbox` when the task is done; if it doesn't, the VM is reaped by idle timeout (5 min on the free tier, up to 2 h on Scale).
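The lifecycle above can be sketched with a hypothetical wrapper; the class and method names are assumptions for illustration, not the real Podflare client API:

```python
# Hypothetical session wrapper; names are illustrative assumptions,
# not the real Podflare SDK.
class PodflareSession:
    def __init__(self):
        self.sandbox_id = "sb-demo"   # create_sandbox would return a real ID
        self.alive = True

    def run_python(self, code: str) -> str:
        if not self.alive:
            raise RuntimeError("sandbox already destroyed")
        return f"ran {len(code)} bytes in {self.sandbox_id}"

    def destroy(self) -> None:
        # Maps to destroy_sandbox; without it, the idle timeout
        # (5 min free tier, up to 2 h on Scale) reaps the VM.
        self.alive = False

session = PodflareSession()
try:
    session.run_python("import pandas as pd")  # state persists across calls
    session.run_python("df = pd.DataFrame()")  # same interpreter, same session
finally:
    session.destroy()  # always tear down when the task is done
```

The `try`/`finally` pattern mirrors what the agent is expected to do: even if a step fails mid-task, the sandbox still gets torn down rather than waiting for the idle reaper.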
## Troubleshooting

- **`PODFLARE_API_KEY` not set**: export the env var before `openclaw` starts, or add it to the skill entry via `openclaw config skills.entries.podflare.env.PODFLARE_API_KEY`.
- **401 Unauthorized**: the key you minted may have been revoked. Visit dashboard.podflare.ai/keys and mint a new one.
- **The agent keeps using `Bash` instead of `run_python`**: OpenClaw's model is free to pick either tool. If you want to force sandbox-only execution, set a project-level rule instructing the agent to prefer the Podflare tools, or deny the default Bash entirely via your OpenClaw permission config.
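One hypothetical shape for such a deny rule is sketched below; the key names and schema here are assumptions for illustration, so check your OpenClaw permission-config reference for the real structure:

```json
{
  "permissions": {
    "deny": ["Bash"],
    "allow": [
      "create_sandbox", "run_python", "run_bash", "fork",
      "merge_into", "upload", "download", "destroy_sandbox"
    ]
  }
}
```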
## Also on

- Smithery (MCP registry): `smithery mcp install podflare/sandbox`
- Claude Desktop: direct MCP config
- Claude Code / Cursor / Codex: direct MCP config for each
- MCP: the low-level protocol docs

