Install
`openclaw skills install agent-native-design`

Use when designing, reviewing, or refactoring a CLI that must serve AI agents alongside humans, or when converting an API or SDK into an agent-usable CLI interface. This skill helps analyze, design, and refactor command-line tools so they can reliably serve humans, AI agents, and orchestration systems at the same time.
It is not a skill for merely using a CLI. It is a skill for designing and reviewing a CLI as an agent-native interface.
The skill focuses on four goals:
Use this skill when the user wants to:
Typical prompts include:
Do not use this skill when the user only wants:
An agent-native CLI must simultaneously serve three audiences.
**Humans.** Needs: readable output, friendly error messages, onboarding guidance. Channels: stderr, optional --format table, interactive TUI when appropriate.
**Agents.** Needs: structured data, stable contracts, self-description. Channels: stdout as JSON, stable exit codes, schema introspection, dry-run previews, generated skills/docs.
**Systems.** Needs: delegated authentication, process management, deterministic error routing. Channels: environment variables, exit codes, dry-run mode, stable command semantics.
| Channel | Primary audience |
|---|---|
| stdout | Machines and agents |
| stderr | Humans |
| exit codes | Systems and orchestrators |
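A minimal sketch of the channel split, using a hypothetical `deploy` command (the name and payloads are illustrative, not part of any real CLI):

```shell
# Hypothetical command: structured result on stdout, human-readable note
# on stderr, outcome in the exit code.
deploy() {
  echo '{"ok":true,"data":{"id":"dep_123"}}'   # stdout: machines and agents
  echo "Deployed dep_123 in 3.2s" >&2          # stderr: humans
  return 0                                     # exit code: orchestrators
}

result=$(deploy 2>/dev/null)   # an agent captures stdout only
echo "$result"
```

Because the channels never mix, the same invocation serves all three audiences at once.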
This skill teaches how to make the CLI a first-class interface for agents. Production agents (Claude Code, Cursor, Gemini CLI) often pair a CLI with an MCP server — CLI for state changes and local/scriptable work, MCP for multi-tenant SaaS and per-user auth. When a design needs the MCP side as well, see references/hybrid-mcp-cli.md for the decision matrix and the benchmark data behind the CLI/MCP tradeoff.
| Phase | Step | Description |
|---|---|---|
| 0. Bootstrap | 1 | Human/system obtains auth token or credentials |
| 0. Bootstrap | 2 | Set trusted env vars: token, profile, safety mode |
| 1. Discovery | 3 | Agent loads skills or command summaries |
| 1. Discovery | 4 | Agent queries schema/help for parameters |
| 2. Planning | 5 | Agent uses --dry-run to preview request shape |
| 3. Execution | 6 | Agent executes with validated inputs |
| 4. Interpretation | 7 | Agent parses structured result |
| 5. Recovery | 8 | Agent uses exit code + error object to retry, re-auth, repair, or escalate |
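The loop above can be sketched end to end against a mocked CLI — `mycli`, its subcommands, and its payloads are all hypothetical stand-ins:

```shell
# Mock CLI so each phase of the loop is runnable.
mycli() {
  case "$1 $2" in
    "schema users.create") echo '{"params":{"name":"string"}}' ;;  # discovery
    "users create")
      if [ "$3" = "--dry-run" ]; then
        echo '{"ok":true,"preview":{"name":"Ada"}}'                # planning
      else
        echo '{"ok":true,"data":{"id":1}}'                         # execution
      fi ;;
  esac
}

mycli schema users.create              # phase 1: typed parameters
mycli users create --dry-run           # phase 2: preview, no mutation
out=$(mycli users create --name Ada)   # phase 3: execute
code=$?                                # phases 4-5: parse result + exit code
echo "$out exit=$code"
```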
A CLI that does not support every phase is incomplete from the agent's perspective.
These are load-bearing. Each principle has at least one rubric criterion and at least one example backing it.
The CLI must serve human, agent, and system simultaneously. A design that serves only one audience is incomplete.
stdout should always be parseable and stable. Both success and failure are structured JSON. The CLI must decide for itself which audience is reading: detect at startup whether stdout is a TTY, default to JSON when it is not, default to human-readable when it is. NO_COLOR and an explicit --format json|table flag override the auto-detection. Agents should never have to remember to pass --format json — if they have to, they will forget, and the run will silently produce un-parseable prose. Envelope and error contract: references/design-patterns.md#output-envelopes.
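A sketch of the detection logic, with flag parsing simplified and `pick_format` an illustrative name (NO_COLOR handling omitted for brevity):

```shell
# Default by audience: table for a terminal, JSON for a pipe.
# An explicit --format flag always overrides the auto-detection.
pick_format() {
  fmt=table
  [ -t 1 ] || fmt=json                         # stdout is not a TTY -> JSON
  for arg in "$@"; do
    case "$arg" in --format=*) fmt="${arg#--format=}" ;; esac
  done
  printf '%s\n' "$fmt"
}

pick_format --format=json   # explicit override always wins
```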
CLI arguments are not inherently trusted — they may come from a hallucinating or prompt-injected agent. Environment-level configuration set by the human or system is more trusted. The agent chooses what to do within a bounded surface; the human defines where and how it is allowed to operate.
The CLI must be self-describing enough that an agent can use it without reading external README files. Self-description must be progressive, not eager: top-level --help lists resources; resource help lists actions; action help lists flags; a separate schema <resource.action> returns the full typed schema. A CLI with hundreds of commands that dumps everything into the first --help pays that token cost on every agent invocation. See references/design-patterns.md#help-design and examples.md Examples 2 and 5.
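Progressive disclosure can be mocked in a few lines — `mycli` and its tiers are hypothetical, but the shape is the point:

```shell
# Each help level reveals only the next tier; full types live behind
# a separate schema command.
mycli() {
  case "$*" in
    "--help")               echo "resources: users, repos" ;;
    "users --help")         echo "actions: list, create" ;;
    "users create --help")  echo "flags: --name <string> [--email <string>]" ;;
    "schema users.create")  echo '{"name":{"type":"string","required":true}}' ;;
  esac
}

mycli --help                # cheapest call an agent can make
mycli schema users.create   # full typed schema, only when needed
```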
Read commands are easy to discover; mutating commands carry explicit warnings; destructive commands are hidden from skills or gated separately. Tier table and rationale: references/design-patterns.md#safety-design. Tiers are necessary but not sufficient — they are a prompt-side defense and approval fatigue degrades them quickly. Assume the agent runtime will additionally sandbox the CLI at the OS level (filesystem, network, processes), and design destructive commands to fail closed inside that sandbox.
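One way to make a destructive verb fail closed, sketched with a hypothetical opt-in variable the human must set explicitly:

```shell
# Read verbs pass; destructive verbs require an explicit human opt-in.
run_tiered() {
  case "$1" in
    list|get) : ;;                                  # read tier
    delete|purge)                                   # destructive tier
      if [ "${MYCLI_ALLOW_DESTRUCTIVE:-0}" != "1" ]; then
        echo '{"ok":false,"error":{"code":"TIER_BLOCKED","verb":"'"$1"'"}}'
        return 3
      fi ;;
  esac
  echo '{"ok":true,"verb":"'"$1"'"}'
}

run_tiered list
run_tiered delete || echo "blocked" >&2
```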
Inputs are validated once at the CLI entry point. Internal code operates on validated, typed, trusted structures. Validation functions are centralized and tested for both pass and reject cases.
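A sketch of one such entry-point validator (the name rules here are illustrative, not a recommendation):

```shell
# Validate once at the entry point; this returns 0 only for names
# internal code can treat as trusted.
validate_name() {
  case "$1" in
    "")               echo '{"ok":false,"error":{"code":"NAME_REQUIRED"}}'; return 2 ;;
    *[!A-Za-z0-9_-]*) echo '{"ok":false,"error":{"code":"NAME_INVALID"}}';  return 2 ;;
  esac
}

validate_name "good-name" && echo "accepted"
validate_name "bad name!" >/dev/null || echo "rejected"
```

Testing both the pass and reject paths, as the principle requires, is then a matter of asserting on the return code.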
If a schema exists, everything derives from it: CLI command structure, validation rules, help text, generated docs, generated skills, type definitions, dry-run contracts. The schema is never manually duplicated. The schema must also carry its own version and deprecation signals, surfaced in the meta block of every response, so agents that have cached an older view can detect drift and re-discover rather than silently calling a removed method. Full versioning contract and example: references/design-patterns.md#schema-versioning.
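An illustrative envelope carrying those version signals — the field names are assumptions for the sketch, not a fixed contract:

```json
{
  "ok": true,
  "data": { "id": "usr_1" },
  "meta": {
    "schema_version": "2.3.0",
    "deprecations": [
      { "method": "users.rename", "removed_in": "3.0.0", "use": "users.update" }
    ]
  }
}
```

An agent that cached schema_version 2.x can compare on every response and re-run discovery when the major version moves.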
Authentication is obtained and refreshed by human/system-managed flows. The agent uses credentials; it never owns the auth lifecycle. Preferred mechanisms: environment variables, config files, OS keychain integration, externally refreshed tokens. Canonical pattern: examples.md Example 3.
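A sketch of the consuming side, with MYCLI_TOKEN as a hypothetical variable name:

```shell
# The CLI consumes a token someone else provisioned; it never starts
# a login flow itself.
require_auth() {
  if [ -z "${MYCLI_TOKEN:-}" ]; then
    echo "Not authenticated. Run your org's login flow, then export MYCLI_TOKEN." >&2
    return 4
  fi
}

MYCLI_TOKEN=tok_123
require_auth && echo "authorized"
```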
Throttle to one check per 24 hours per installation; never mutate the skill directory without explicit user consent.
If <this-skill-dir>/.last_update exists and is less than 24 hours old, skip this step entirely.
Otherwise, fetch the latest tag from upstream:
```shell
git -C <this-skill-dir> ls-remote --tags origin 'v*' 2>/dev/null \
  | awk '{print $2}' | sed 's|refs/tags/||' \
  | sort -V | tail -1
```
Compare with this skill's metadata.version from the frontmatter. If the upstream tag is strictly newer (semver), tell the user in one line and ask:
"A newer version of this skill is available: vX.Y.Z → vA.B.C. Want me to
git pull?"
If they say yes, run git -C <this-skill-dir> pull --ff-only. Refresh .last_update either way so the prompt doesn't repeat for 24 hours.
If upstream is the same or older, refresh .last_update silently and continue.
On any failure (offline, not a git checkout — e.g. ClawHub-installed copy, read-only path, no permission), swallow the error silently and continue with the user's task. Do not mention the failure.
This step is the only place this skill ever touches its own files. It notifies; it does not pull without permission. The user owns the update lifecycle (Principle 7 applied to the skill itself).
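The 24-hour throttle above can be sketched like this (a temp directory stands in for the skill directory; 1440 minutes = 24 hours):

```shell
# .last_update acts as a rate-limit stamp: skip the upstream check
# whenever the stamp is newer than 24 hours.
skill_dir="$(mktemp -d)"
stamp="$skill_dir/.last_update"

fresh() { [ -e "$stamp" ] && [ -n "$(find "$stamp" -mmin -1440)" ]; }

if fresh; then
  echo "skip: checked within 24h"
else
  echo "check upstream"        # ls-remote + version compare would go here
  touch "$stamp"               # refresh either way
fi
```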
Decide whether the user is providing: an existing CLI, an API to be wrapped, a conceptual design, a partial interface, or a failure case.
Human: Is there readable output? Are errors understandable? Is onboarding supported?
Agent: Is stdout stable JSON? Can the CLI describe itself? Is there schema introspection and dry-run?
System: Is auth delegatable? Are exit codes stable? Can failures be routed deterministically?
Check whether the CLI supports: bootstrap, discovery, parameter understanding, preview, execution, parsing, recovery.
Use the 14-criterion rubric to score the CLI. The full rubric lives in references/rubric.md. Every one of the seven principles has at least one rubric row backing it, so the score-to-principle mapping is total: P0 → Three-audience support, Non-interactive operation; P1 → Stdout contract, Stderr separation, Idempotent retries, Error recoverability; P2 → Trust boundary; P3 → Self-description (help), Dry-run; P4 → Safety tiers; P5 → Boundary validation; P6 → Schema introspection; P7 → Auth delegation. Then summarize per principle with evidence, risk, and recommendation. The full review checklists live in references/checklists.md.
State whether the CLI is agent-native, partially agent-native, or not yet agent-native.
Assess support for human, agent, system.
Assess each phase: auth bootstrap → env setup → skill/help discovery → schema introspection → dry-run → execution → parsing and recovery.
Report the 14-criterion rubric score first, then summarize the seven principles as: status · evidence · issue · recommendation.
Summarize design failures: human-only output, unstable JSON, no schema introspection, destructive commands overexposed, auth coupled to agent, ambiguous exit codes.
Prioritized recommendations with examples drawn from references/examples.md.
Anti-patterns to flag: interactive prompts with no --yes escape and no TTY-aware fallback, and a monolithic --help — agents that call the CLI in loops pay this cost on every invocation; use progressive disclosure instead. The token-cost rationale lives in references/hybrid-mcp-cli.md.

Load on demand — these are not in the agent's context until needed:
| File | Read when |
|---|---|
| references/examples.md | Showing the user a good envelope, error, dry-run, batch response, or anti-pattern |
| references/rubric.md | Producing the score component of a CLI review |
| references/checklists.md | Walking through a CLI auditing list with the user, or sanity-checking a new design |
| references/design-patterns.md | Writing the contract for envelopes, exit codes, idempotency, non-interactive mode, long-running commands, schema versioning, locale/time |
| references/hybrid-mcp-cli.md | Deciding CLI vs. MCP vs. both, or citing benchmark numbers behind the CLI efficiency claim |
| references/testing.md | Showing the user how to verify their CLI actually upholds the contract (envelope shape, idempotency replay, TTY behavior, schema drift, dry-run safety, locale determinism) — load this when the design review converges on "how do we keep it agent-native over time?" |
| references/citations.md | Citing the primary sources behind a recommendation |
This skill helps turn a CLI into a trustworthy execution interface for humans, AI agents, and systems through structured output, self-description, delegated authentication, safety boundaries, and a complete interaction loop.