
ClawUp runs your Claw inside a managed container. The engine that powers that container is called a runtime. Two runtimes are supported today: OpenClaw and Hermes Agent. Both are offered across Free, Basic, Pro, and Enterprise plans — the runtime choice is independent of the subscription plan. Pick a runtime when you create a Claw or a Team. You can’t change it afterwards; to switch, create a new Claw and restore from the old one (within the same runtime family — see Restore & Migration).

Your harness, your agent, your memory

A runtime is the harness around your agent. In closed agent platforms, the harness is opaque — the vendor controls it, hides how memory is managed, and locks you into one model and one state format. ClawUp is deliberately the opposite:
  • Your harness. Pick OpenClaw or Hermes Agent at create time. Both are open implementations; both expose the same management API (provision, stop, start, restore, chat, files, channels). You’re never stuck in a proprietary scaffolding.
  • Your agent. Pick any frontier model — Anthropic, OpenAI, OpenRouter, Google, DeepSeek, Mistral, xAI — via BYOK, or go Managed and pay from your balance. The model picker reads a live registry that refreshes with each provider’s catalog, so you always see the latest.
  • Your memory. Every Claw’s workspace — SOUL.md, IDENTITY.md, TEAM.md, agent-authored files, memory database — lives in a container you control. State is tarred to your OSS bucket on stop and restored atomically on start; the Files API exposes the full workspace for read/write over HTTP; and the whole platform is self-hostable via docker-compose or Kubernetes. Your agent’s history is portable across models, hosts, and regions.
In a closed harness, your agent is replicable by anyone with the same model keys. In an open harness with portable memory, your agent is yours — with its accumulated context, preferences, and earned behaviour.
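As a minimal sketch of the stop/restore cycle described above — tar the workspace on stop, restore it atomically on start — assuming local paths only (the actual upload to your OSS bucket is omitted):

```python
import os
import tarfile
import tempfile

def snapshot(state_dir: str, out_path: str) -> None:
    """Archive the workspace on stop. Writes to a temp file first and
    renames, so a partially written tar is never observed at out_path."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(out_path))
    os.close(fd)
    with tarfile.open(tmp, "w:gz") as tar:
        tar.add(state_dir, arcname=".")
    os.replace(tmp, out_path)  # atomic rename on POSIX

def restore(tar_path: str, state_dir: str) -> None:
    """Unpack the archived state into a fresh workspace on start."""
    os.makedirs(state_dir, exist_ok=True)
    with tarfile.open(tar_path, "r:gz") as tar:
        tar.extractall(state_dir)
```

The `os.replace` step is what makes the snapshot atomic from a reader's point of view; a real deployment would upload `out_path` to the OSS bucket afterwards.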
The sections below walk through each runtime’s specific strengths and constraints.

At a glance

|  | OpenClaw | Hermes Agent |
| --- | --- | --- |
| Engine | Node.js | Python |
| Upstream | ClawUp OpenClaw | Nous Research Hermes Agent |
| Managed default image | `alpine/openclaw:2026.4.15` | `nousresearch/hermes-agent:v2026.4.16` |
| Best for | ClawHub skills, broad LLM provider coverage, mature Teams & Nebula | Built-in code execution, delegation, Python tool ecosystem |
| Skills | ClawHub catalog (one-click install per identity) | `hermes skills install` catalog |
| Tools | MCP + ClawUp Hooks | MCP + Hermes Hooks |
| Config format | `openclaw.json` | `config.yaml` + `.env` |
| Config reload | Live (SIGUSR1, no container restart) | Container restart in place |
| Channels | Telegram, Feishu, Discord, Slack, WhatsApp, WeCom, Mattermost | Telegram, Feishu, Discord, Slack, WhatsApp, WeCom, Mattermost |
| Supported providers | OpenAI, Anthropic, OpenRouter, Google, DeepSeek, Moonshot, Qwen, custom | Same list, plus any OpenAI-compatible endpoint via `provider: custom` + `base_url` |
| Identity / pre-seeded skills | Available (picked at create time) | Not available (Hermes uses its own skill catalog) |

OpenClaw

The default runtime and the most mature path. OpenClaw is ClawUp’s Node.js agent engine; it powers every feature that originated in ClawUp (Teams, Nebula, ClawHub, runtime defaults, SOUL.md / IDENTITY.md layering). Strongest fit when you want:
  • ClawHub skill packages — pick an identity at create time and the matching skills install automatically. No Python runtime assumed.
  • Live config reloads — channel toggles, tool bindings, and model switches apply without a container restart (SIGUSR1-based in-process reload).
  • Broadest docs + examples — every tutorial on this site assumes OpenClaw unless noted.
Data layout inside the runtime:
```
$HOME/.openclaw/
├── openclaw.json          # runtime config (channels, tools, provider)
├── workspace/             # user-visible files (Files API)
│   ├── SOUL.md            # agent persona
│   ├── IDENTITY.md        # identity-level prompt
│   └── …                  # user uploads, agent-created files
├── auth-profiles.json     # per-provider API keys
└── sessions/, logs/       # runtime state
```

Hermes Agent

Hermes is an upstream open-source Python agent from Nous Research. ClawUp wraps it in the same provisioning, storage, and billing as OpenClaw, so from a user’s perspective it behaves the same way (Create Claw, Add Model, Install Tools, chat, pair channels). Strongest fit when you want:
  • Built-in code execution — Hermes ships a terminal tool, a Python sandbox, and delegation primitives directly in the agent loop.
  • Python-native tool ecosystem — skill packages are installed via hermes skills install <slug> and can pull any Python dependency.
  • OpenAI-compatible proxies — provider: custom + base_url lets you point at any OpenAI-API-shaped endpoint (including ClawUp’s managed SkyAPI).
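A custom-provider block in config.yaml might look like the following sketch — only `provider: custom` and `base_url` are named in this page; the remaining field names are illustrative assumptions:

```yaml
provider: custom
base_url: https://llm-proxy.internal.example/v1  # any OpenAI-API-shaped endpoint
model: my-proxy-model                            # illustrative; whatever the proxy serves
api_key: ${PROXY_API_KEY}                        # illustrative; secrets typically live in .env
```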
Data layout inside the runtime:
```
$HERMES_DATA_DIR/  (defaults to $HOME/.openclaw for ClawUp compat)
├── config.yaml            # model, provider, base_url, api_key (0600)
├── .env                   # TELEGRAM_BOT_TOKEN, provider keys (0600)
├── workspace/             # user-visible files (Files API)
├── skills/                # hermes skill packages
├── cron/                  # scheduled jobs
├── sessions/, logs/       # runtime state
└── gateway_state.json     # platform connection state
```

Differences from OpenClaw

| Area | OpenClaw | Hermes Agent |
| --- | --- | --- |
| Identity / ClawHub skills | Selectable at create time | Not applicable (identity picker is disabled) |
| Config reload | SIGUSR1 in place | Container restart in place (`kill -TERM 1` inside the pod) |
| Secrets in state dir | `auth-profiles.json` (JSON) | `.env` (dotenv), plus `config.yaml` for custom providers |
| Channel token flow | Written into `openclaw.json` | Exported as env vars (`TELEGRAM_BOT_TOKEN`, etc.) at gateway startup |
| CLI inside the pod | `openclaw` on `$PATH` | `/opt/hermes/.venv/bin/hermes` (venv-sourced) |
| File exposure | `$STATE_DIR/workspace` (scoped) | `$HERMES_DATA_DIR/workspace` (scoped) |

Known gaps

  • Hermes on Aliyun ACK runs behind a readiness probe tuned for Hermes’s 90–120 s cold start, so first provision takes visibly longer than OpenClaw’s.
  • Hermes self-restart on pairing — the first unauthorized inbound message on Telegram triggers a Hermes-internal restart cycle while the allowlist settles. Subsequent messages are stable.
  • Hermes teams still reconcile through the same loop as OpenClaw teams; Nebula and remote_send interop is supported but has less mileage in production.

Picking a runtime

Choose OpenClaw if you want the shortest path to a ClawHub skill pack, the most-polished Teams flow, or live config reloads without container restarts. Choose Hermes Agent if you want built-in code execution, Python-based skills, or are integrating an OpenAI-compatible LLM proxy. If you’re unsure, start with OpenClaw — you can always create a second Claw on Hermes later and copy workspace files across with the Files API.
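The workspace-copy step at the end can be sketched as below, assuming a hypothetical Files API shape — the base URL, route layout, and bearer-token auth are illustrative, not the documented endpoints:

```python
import urllib.request

BASE = "https://api.clawup.example"  # hypothetical base URL

def file_url(claw_id: str, path: str) -> str:
    # Hypothetical route shape for the Files API.
    return f"{BASE}/v1/claws/{claw_id}/files/{path}"

def copy_workspace_file(src_claw: str, dst_claw: str,
                        path: str, token: str) -> None:
    headers = {"Authorization": f"Bearer {token}"}
    # Read the file from the source Claw's workspace...
    req = urllib.request.Request(file_url(src_claw, path), headers=headers)
    data = urllib.request.urlopen(req).read()
    # ...and write it into the destination Claw's workspace.
    put = urllib.request.Request(file_url(dst_claw, path), data=data,
                                 headers=headers, method="PUT")
    urllib.request.urlopen(put)
```

Because both runtimes scope the Files API to the same `workspace/` directory, a loop over `SOUL.md`, `IDENTITY.md`, and any agent-authored files is enough to seed the new Claw.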