ClawUp runs your Claw inside a managed container. The engine that powers that container is called a runtime. Two runtimes are supported today: OpenClaw and Hermes Agent. Both are offered across Free, Basic, Pro, and Enterprise plans — the runtime choice is independent of the subscription plan. Pick a runtime when you create a Claw or a Team. You can’t change it afterwards; to switch, create a new Claw and restore from the old one (within the same runtime family — see Restore & Migration).

Documentation Index
Fetch the complete documentation index at: https://docs.clawup.org/llms.txt
Use this file to discover all available pages before exploring further.
Your harness, your agent, your memory
A runtime is the harness around your agent. In closed agent platforms, the harness is opaque — the vendor controls it, hides how memory is managed, and locks you into one model and one state format. ClawUp is deliberately the opposite:

- Your harness. Pick OpenClaw or Hermes Agent at create time. Both are open implementations; both expose the same management API (provision, stop, start, restore, chat, files, channels). You’re never stuck in a proprietary scaffolding.
- Your agent. Pick any frontier model — Anthropic, OpenAI, OpenRouter, Google, DeepSeek, Mistral, xAI — via BYOK, or go Managed and pay from your balance. The model picker reads a live registry that refreshes with each provider’s catalog, so you always see the latest.
- Your memory. Every Claw’s workspace — `SOUL.md`, `IDENTITY.md`, `TEAM.md`, agent-authored files, memory database — lives in a container you control. State is tarred to your OSS bucket on stop and restores atomically on start; the Files API exposes the full workspace for read/write over HTTP; the whole platform is self-hostable via `docker-compose` or Kubernetes. Your agent’s history is portable across models, hosts, and regions.
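As a concrete illustration of the workspace being plain files over HTTP, a Files API exchange might look like the sketch below. The paths, headers, and payloads here are hypothetical shapes, not the documented API — consult the Files API reference for the real ones.

```text
# Hypothetical request/response shapes — illustrative only.
GET /v1/claws/{claw_id}/files/SOUL.md
Authorization: Bearer <api-key>

200 OK
Content-Type: text/markdown

# Soul
...

PUT /v1/claws/{claw_id}/files/notes/today.md
Authorization: Bearer <api-key>
Content-Type: text/markdown

Remember: ship the report by Friday.
```

The point is portability: because the workspace is ordinary files behind ordinary HTTP, anything that can speak HTTP can read or write your agent’s memory.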
In a closed harness, your agent is replicable by anyone with the same model keys. In an open harness with portable memory, your agent is yours — with its accumulated context, preferences, and earned behaviour.

The sections below walk through each runtime’s specific strengths and constraints.
At a glance
| | OpenClaw | Hermes Agent |
|---|---|---|
| Engine | Node.js | Python |
| Upstream | ClawUp OpenClaw | Nous Research Hermes Agent |
| Managed default image | alpine/openclaw:2026.4.15 | nousresearch/hermes-agent:v2026.4.16 |
| Best for | ClawHub skills, broad LLM provider coverage, mature Teams & Nebula | Built-in code execution, delegation, Python tool ecosystem |
| Skills | ClawHub catalog (one-click install per identity) | hermes skills install catalog |
| Tools | MCP + ClawUp Hooks | MCP + Hermes Hooks |
| Config format | openclaw.json | config.yaml + .env |
| Config reload | Live (SIGUSR1 — no container restart) | Container restart in place |
| Channels | Telegram, Feishu, Discord, Slack, WhatsApp, WeCom, Mattermost | Telegram, Feishu, Discord, Slack, WhatsApp, WeCom, Mattermost |
| Supported providers | OpenAI, Anthropic, OpenRouter, Google, DeepSeek, Moonshot, Qwen, custom | Same list, plus any OpenAI-compatible endpoint via provider: custom + base_url |
| Identity / pre-seeded skills | Available (picked at create time) | Not available — Hermes uses its own skill catalog |
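Whichever runtime you pick, it answers the same management calls (provision, stop, start, restore, chat, files, channels). A hypothetical lifecycle, sketched as HTTP — the exact paths and payloads are assumptions, not the documented API:

```text
POST /v1/claws                      {"runtime": "hermes-agent", "name": "ops-bot"}
POST /v1/claws/{claw_id}/stop       # state is tarred to your OSS bucket
POST /v1/claws/{claw_id}/start      # state restores atomically
POST /v1/claws/{claw_id}/chat       {"message": "status?"}
```

This is why the runtime choice only matters inside the container: from the outside, both look the same to tooling.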
OpenClaw
The default runtime and the most mature path. OpenClaw is ClawUp’s Node.js agent engine; it powers every feature that originated in ClawUp (Teams, Nebula, ClawHub, runtime defaults, SOUL.md / IDENTITY.md layering). Strongest fit when you want:

- ClawHub skill packages — pick an identity at create time and the matching skills install automatically. No Python runtime assumed.
- Live config reloads — channel toggles, tool bindings, and model switches apply without a container restart (SIGUSR1-based in-process reload).
- Broadest docs + examples — every tutorial on this site assumes OpenClaw unless noted.
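The live-reload behaviour is the classic Unix in-process reload pattern: edit the config on disk, send the process SIGUSR1, and it re-reads its config without dying. The script below is a generic, self-contained demonstration of that mechanism using a stand-in shell "agent", not the real `openclaw` process.

```shell
#!/bin/sh
# Generic demonstration of signal-based live reload, the same Unix
# mechanism behind OpenClaw's SIGUSR1 config reload. The "agent" here
# is a stand-in shell script, not the real openclaw binary.

# A long-running process that re-reads its config file on SIGUSR1.
cat > /tmp/agent.sh <<'EOF'
#!/bin/sh
reload() { echo "reloaded: $(cat /tmp/agent.conf)"; }
trap reload USR1
echo "started: $(cat /tmp/agent.conf)"
while true; do sleep 0.2; done
EOF
chmod +x /tmp/agent.sh

echo "model=claude" > /tmp/agent.conf
/tmp/agent.sh > /tmp/agent.log &
AGENT=$!
sleep 1

echo "model=gpt" > /tmp/agent.conf   # edit the config on disk
kill -USR1 "$AGENT"                  # apply it in place, no restart
sleep 1
kill "$AGENT" 2>/dev/null

cat /tmp/agent.log
```

The process keeps its PID, open sockets, and in-memory state across the reload — which is exactly why channel toggles and model switches can apply without dropping active sessions.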
Hermes Agent
Hermes is an upstream open-source Python agent from Nous Research. ClawUp wraps it with the same provisioning, storage, and billing as OpenClaw, so it behaves the same way from a user-visible perspective (Create Claw, Add Model, Install Tools, chat, pair channels). Strongest fit when you want:

- Built-in code execution — Hermes ships a terminal tool, a Python sandbox, and delegation primitives directly in the agent loop.
- Python-native tool ecosystem — skill packages are installed via `hermes skills install <slug>` and can pull any Python dependency.
- OpenAI-compatible proxies — `provider: custom` + `base_url` lets you point at any OpenAI-API-shaped endpoint (including ClawUp’s managed SkyAPI).
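A custom-provider entry in Hermes’s `config.yaml` might look like the fragment below. The key names and nesting are a sketch inferred from the `provider: custom` + `base_url` pattern described above, not the authoritative schema — check the Hermes Agent configuration reference before relying on them.

```yaml
# Hypothetical config.yaml fragment — key names are illustrative.
model:
  provider: custom                 # any OpenAI-API-shaped endpoint
  base_url: https://<your-proxy>/v1
  model: <model-id>
  api_key: ${CUSTOM_API_KEY}       # secret stays in .env, not in YAML
```

Keeping the key in `.env` and referencing it from `config.yaml` matches the secrets split described in the differences table below.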
Differences from OpenClaw
| Area | OpenClaw | Hermes Agent |
|---|---|---|
| Identity / ClawHub skills | Selectable at create time | Not applicable — identity picker is disabled |
| Config reload | SIGUSR1 in place | Container restart in place (kill -TERM 1 inside the pod) |
| Secrets in state dir | auth-profiles.json (JSON) | .env (dotenv), plus config.yaml for custom providers |
| Channel token flow | Written into openclaw.json | Exported as env vars (TELEGRAM_BOT_TOKEN, etc.) at gateway startup |
| CLI inside the pod | openclaw on $PATH | /opt/hermes/.venv/bin/hermes (venv-sourced) |
| File exposure | $STATE_DIR/workspace (scoped) | $HERMES_DATA_DIR/workspace (scoped) |
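The secrets split in the table means a Hermes state directory carries a dotenv file roughly like the sketch below. `TELEGRAM_BOT_TOKEN` comes from the table above; the other variable name is illustrative only.

```text
# Hypothetical .env in the Hermes state dir.
# The gateway exports these as env vars at startup.
TELEGRAM_BOT_TOKEN=123456:redacted
CUSTOM_API_KEY=sk-redacted
```

By contrast, OpenClaw keeps the equivalent secrets in JSON (`auth-profiles.json`) and writes channel tokens into `openclaw.json` directly.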
Known gaps
- Hermes on Aliyun ACK runs behind a readiness probe tuned for Hermes’s 90–120 s cold start. First-provision takes visibly longer than OpenClaw.
- Hermes self-restart on pairing — the first unauthorized inbound message on Telegram triggers a Hermes-internal restart cycle while the allowlist settles. Subsequent messages are stable.
- Hermes teams still reconcile through the same loop as OpenClaw teams; Nebula and `remote_send` interop is supported but has less mileage in production.