Install

openclaw skills install skillcraft

Design and build OpenClaw skills. Use when asked to "make/build/craft a skill", extract ad-hoc functionality into a skill, or package scripts/instructions for reuse. Covers OpenClaw-specific integration (tool calling, memory, message routing, cron, canvas, nodes) and ClawHub publishing.

An opinionated guide for creating OpenClaw skills. Focuses on OpenClaw-specific integration — message routing, cron scheduling, memory persistence, channel formatting, frontmatter gating — not generic programming advice.
Docs: https://docs.openclaw.ai/tools/skills · https://docs.openclaw.ai/tools/creating-skills
This skill is written for frontier-class models (Opus, Sonnet). If you're running a cheaper model and find a stage underspecified, expand it yourself — the design sequence is a scaffold, not a script. Cheaper models should read {baseDir}/patterns/ more carefully before architecting.

Skip this stage if building from scratch. Use it when packaging existing functionality (scripts, TOOLS.md sections, conversation patterns, repeated instructions) into a skill.

Gather what exists, where it lives, what works, and what's fragile. Then proceed to Stage 1.
Work through with the user:
Ask early: Is this for your setup, or should it work on any OpenClaw instance?
| Choice | Implications |
|---|---|
| Universal | Generic paths, no local assumptions, ClawHub-ready |
| Particular | Can reference local skills, tools, workspace config |
Scan <available_skills> from the system prompt for complementary capabilities. Read promising skills to understand composition opportunities.
Review the docs with the skill's needs in mind. Think compositionally — OpenClaw's primitives combine in powerful ways. Key docs to check:
| Need | Doc |
|---|---|
| Messages | /concepts/messages |
| Cron/scheduling | /automation/cron-jobs |
| Subagents | /tools/subagents |
| Browser | /tools/browser |
| Canvas UI | /tools/ (canvas) |
| Node devices | /nodes/ |
| Slash commands | /tools/slash-commands |
See {baseDir}/patterns/composable-examples.md for inspiration on combining these.
Based on Stages 1–2, identify which patterns apply:
| If the skill... | Pattern |
|---|---|
| Wraps a CLI tool | {baseDir}/patterns/cli-wrapper.md |
| Wraps a web API | {baseDir}/patterns/api-wrapper.md |
| Monitors and notifies | {baseDir}/patterns/monitor.md |
Load all that apply and synthesise. Most skills combine patterns.
Script vs. instructions split: Scripts handle deterministic mechanics (API calls, data gathering, file processing). SKILL.md instructions handle judgment (interpreting results, choosing approaches, composing output). The boundary is: could a less intelligent system do this reliably? If yes → script.
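As a sketch of that boundary (the file and function names here are hypothetical, not part of any OpenClaw API), a bundled script handles only the deterministic half:

```python
#!/usr/bin/env python3
"""Hypothetical {baseDir}/scripts/gather.py: deterministic mechanics only.

Collects and normalises raw data, then emits JSON. Interpreting the
results and composing output stays in SKILL.md, where judgment lives.
"""
import json
import sys


def gather(items):
    # Purely mechanical: dedupe, sort, count. A less intelligent
    # system could do this reliably, so it belongs in a script.
    unique = sorted(set(items))
    return {"count": len(unique), "items": unique}


if __name__ == "__main__":
    print(json.dumps(gather(sys.argv[1:])))
```

SKILL.md then instructs the agent to run the script and decide what the results mean, rather than re-deriving the mechanics each turn.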
Present proposed architecture for user review:
State locations:

- <workspace>/memory/ — user-facing context
- {baseDir}/state.json — skill-internal state (travels with the skill)
- <workspace>/state/<skill>.json — skill state in a common workspace area

If extracting: include migration notes (what moves, what workspace files need updating).
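For {baseDir}/state.json, a minimal load/save sketch (an assumed layout; adapt to the skill's actual state shape):

```python
import json
from pathlib import Path

# At runtime this resolves under the skill's own directory ({baseDir}),
# so the state travels with the skill.
STATE = Path("state.json")


def load_state():
    # A missing file just means first run: start empty.
    if STATE.exists():
        return json.loads(STATE.read_text())
    return {}


def save_state(state):
    STATE.write_text(json.dumps(state, indent=2, sort_keys=True))
```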
Validate: Does it handle all Stage 1 examples? Any contradictions? Edge cases?
Iterate until the user is satisfied. This is where design problems surface cheaply.
Default: same-session. Work through the spec with user review at each step. Reserve subagent handoff for complex script subcomponents only — SKILL.md and integration logic stay in the main session.
If extracting: update workspace files, clean up old locations, verify standalone operation.
The frontmatter determines discoverability and gating. Format follows the AgentSkills spec with OpenClaw extensions.
---
name: my-skill
description: [description optimised for discovery — see below]
homepage: https://github.com/user/repo # optional
metadata: {"openclaw":{"emoji":"🔧","requires":{"bins":["tool"],"env":["API_KEY"]},"primaryEnv":"API_KEY","install":[...]}}
---
Critical: metadata must be a single-line JSON object (parser limitation).
The description determines whether the skill gets loaded. Include the trigger phrases, the problem domain, and the contexts from your Stage 1 examples.
Test: would the agent select this skill for each of your Stage 1 example phrases?
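For instance, a hypothetical package-tracking skill might pass that test with a description like this (every quoted phrase is an invented example):

```yaml
description: >-
  Track package deliveries. Use when asked to "track a package",
  "where is my order", or to check shipment status from a tracking number.
```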
| Key | Purpose |
|---|---|
| name | Skill identifier (required) |
| description | Discovery text (required) |
| homepage | URL for docs/repo |
| user-invocable | true/false — expose as slash command (default: true) |
| disable-model-invocation | true/false — exclude from model prompt (default: false) |
| command-dispatch | tool — bypass model, dispatch directly to a tool |
| command-tool | Tool name for direct dispatch |
| command-arg-mode | raw — forward raw args to tool |
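Combining these keys, a hypothetical direct-dispatch skill (tool name and behaviour invented for illustration) could look like:

```yaml
---
name: shot
description: Capture a screenshot of the current screen.
user-invocable: true           # exposed as a slash command
disable-model-invocation: true # slash-command only; not in the model prompt
command-dispatch: tool         # skip the model entirely
command-tool: screen_capture   # hypothetical tool name
command-arg-mode: raw          # pass arguments through untouched
---
```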
OpenClaw filters skills at load time using metadata.openclaw:
| Field | Effect |
|---|---|
| always: true | Skip all gates, always load |
| emoji | Display in macOS Skills UI |
| os | Platform filter (darwin, linux, win32) |
| requires.bins | All must exist on PATH |
| requires.anyBins | At least one must exist |
| requires.env | Env var must exist or be in config |
| requires.config | Config paths must be truthy |
| primaryEnv | Maps to skills.entries.<name>.apiKey |
| install | Installer specs for auto-setup (brew/node/go/uv/download) |
Sandbox note: requires.bins checks the host at load time. If sandboxed, the binary must also exist inside the container.
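Putting the gates together: a hypothetical metadata line (binary and env-var names invented) that loads only on macOS/Linux hosts with jq on PATH and an API key configured, kept on one line per the parser limitation above:

```json
{"openclaw":{"emoji":"📦","os":["darwin","linux"],"requires":{"bins":["jq"],"env":["TRACK_API_KEY"]},"primaryEnv":"TRACK_API_KEY"}}
```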
Each eligible skill adds ~97 chars + name + description + location path to the system prompt. Keep descriptions informative but not bloated — every character costs tokens on every turn.
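A back-of-envelope sketch of that cost (the ~97-char overhead is the figure cited above; the exact accounting is OpenClaw's):

```python
OVERHEAD = 97  # approximate fixed chars per eligible skill entry


def entry_chars(name, description, location):
    # Chars this skill adds to the system prompt on every turn.
    return OVERHEAD + len(name) + len(description) + len(location)


# A 300-char description costs over 400 chars (very roughly 100 tokens)
# on every single turn, whether or not the skill is used.
cost = entry_chars("my-skill", "x" * 300, "skills/my-skill/SKILL.md")
```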
"install": [
{"id": "brew", "kind": "brew", "formula": "tap/tool", "bins": ["tool"], "label": "Install via brew"},
{"id": "npm", "kind": "node", "package": "tool", "bins": ["tool"]},
{"id": "uv", "kind": "uv", "package": "tool", "bins": ["tool"]},
{"id": "go", "kind": "go", "package": "github.com/user/tool@latest", "bins": ["tool"]},
{"id": "dl", "kind": "download", "url": "https://...", "archive": "tar.gz"}
]
| Token | Meaning |
|---|---|
| {baseDir} | This skill's directory (OpenClaw resolves at runtime) |
| <workspace>/ | Agent's workspace root |
Use {baseDir} for skill-internal references (scripts, state, patterns) and <workspace>/ for workspace files (TOOLS.md, memory/, etc.).

Bundled patterns: {baseDir}/patterns/ (cli-wrapper, api-wrapper, monitor, composable-examples).
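For instance, a SKILL.md instruction (script and file names hypothetical) would combine both tokens like this:

```markdown
Run {baseDir}/scripts/gather.py, then summarise the results
into <workspace>/memory/status.md for the user.
```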