## Install

openclaw skills install autoskill

Intelligent skill router. Analyzes the current problem statement and context, scores all available skills for applicability, and recommends the most relevant ones in priority order. **No skill is ever invoked without your explicit approval.** Use when you want Claude to automatically identify and recommend the right skills without manually choosing them. Great for complex tasks where the right set of skills is non-obvious.

Invocation:

- /autoskill [problem description]
- /autoskill (uses current conversation context if no args given)

Examples:

- /autoskill fix the login bug that crashes on empty password
- /autoskill add unit tests for the payment module
- /autoskill review my PR before I merge
- /autoskill (running with no args analyzes the current conversation)
The commands below inspect local git state and detect the project language from config files. They do not modify anything, send data externally, or run project code. They are safe to run in any local workspace.
# Current branch, or a sentinel when not in a git repo.
_BRANCH=$(git rev-parse --abbrev-ref HEAD 2>/dev/null || echo "no-git")
echo "BRANCH: $_BRANCH"
# Collect language signals from well-known config files.
_LANG_SIGNALS=""
[ -f package.json ] && _LANG_SIGNALS="$_LANG_SIGNALS typescript,javascript"
{ [ -f requirements.txt ] || [ -f pyproject.toml ] || [ -f setup.py ]; } && _LANG_SIGNALS="$_LANG_SIGNALS python"
[ -f Cargo.toml ] && _LANG_SIGNALS="$_LANG_SIGNALS rust"
[ -f go.mod ] && _LANG_SIGNALS="$_LANG_SIGNALS go"
{ [ -f pom.xml ] || [ -f build.gradle ]; } && _LANG_SIGNALS="$_LANG_SIGNALS java"
[ -f pubspec.yaml ] && _LANG_SIGNALS="$_LANG_SIGNALS dart,flutter"
ls *.csproj 2>/dev/null | head -1 | grep -q . && _LANG_SIGNALS="$_LANG_SIGNALS csharp"
echo "LANG_SIGNALS:${_LANG_SIGNALS:-unknown}"
# Summarize pending git changes: file count and changed file extensions.
# grep -c counts non-empty lines, so an empty status correctly reports 0.
_GIT_CHANGES=$(git status --short 2>/dev/null | head -20)
echo "GIT_CHANGES: $(printf '%s' "$_GIT_CHANGES" | grep -c .) files"
_CHANGED_EXTS=$(printf '%s' "$_GIT_CHANGES" | grep -oE '\.[a-zA-Z]+$' | sort -u | tr '\n' ',')
echo "CHANGED_EXTS: ${_CHANGED_EXTS:-none}"
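As a worked example of the CHANGED_EXTS extraction above, here is the same pipeline run against a small sample of `git status --short` output (the file paths are invented for illustration):

```shell
# Sample status output: two modified TypeScript files and one untracked note.
_GIT_CHANGES=' M src/auth/login.ts
 M src/auth/session.ts
?? notes.md'
# Pull the trailing extension from each line, dedupe, and join with commas.
_CHANGED=$(echo "$_GIT_CHANGES" | grep -oE '\.[a-zA-Z]+$' | sort -u | tr '\n' ',')
echo "CHANGED_EXTS: ${_CHANGED:-none}"   # prints: CHANGED_EXTS: .md,.ts,
```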
The following skills perform irreversible, externally-visible, or broadly-scoped actions. They are always treated as SUGGEST-tier — they will never be recommended for automatic inclusion and always require individual user confirmation before running.
HIGH_RISK_SKILLS = [
# Deployment / release / CI
ship, land-and-deploy, canary, deploy, setup-deploy, prp-pr, deployment-patterns,
# Payment / billing / money
customer-billing-ops, finance-billing-ops, agent-payment-x402,
# Database mutations
database-migrations,
# External communications / public posts
github-ops, x-api, email-ops, messages-ops, unified-notifications-ops,
crosspost, content-writer,
# Account / enterprise / credential operations
enterprise-agent-ops, investor-outreach, cso, security-bounty-hunter,
# Broad shell / file system access
careful, guard, safety-guard,
]
Additional heuristic: Any skill whose description contains keywords such as "deploy", "payment", "billing", "money", "purchase", "credential", "account", "external message", "post to", "send email", "database migration", "DROP TABLE", "rm -rf", "force-push", or "broad shell" is also treated as high-risk even if it is not in the fixed list above.
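The keyword heuristic can be sketched as a case-insensitive pattern scan. The helper name and the sample description below are invented for illustration, and only a subset of the keywords is shown:

```shell
# Hypothetical helper: flag a skill as high-risk if its description
# contains any registry keyword (subset of the list above).
is_high_risk() {
  printf '%s' "$1" | grep -qiE 'deploy|payment|billing|money|purchase|credential|force-push|rm -rf'
}

if is_high_risk 'Tags a release and triggers the deploy pipeline'; then
  echo 'HIGH-RISK'   # prints: HIGH-RISK (description mentions "deploy")
fi
```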
When scoring (Phase 3), check each candidate against this registry and the
heuristic. If it matches, force its tier to SUGGEST and add a [HIGH-RISK]
label in the scoring table, regardless of its numeric score.
## Phase 1: Context Profile

Goal: Build a structured context profile from the arguments and project state.
Input: $ARGUMENTS — the user's problem description. If empty, synthesize from the current conversation: look at the most recent user messages, any error output, open files, or recent tool calls visible in context.
Build the context profile by answering these questions:

1. Action intent: which one best fits?
   - create — building something new (feature, file, component, test)
   - fix — repairing broken behavior (bug, error, crash, regression)
   - review — evaluating quality (code review, security audit, PR check)
   - deploy — shipping or releasing (push, merge, publish, CI)
   - document — writing or updating docs
   - refactor — improving structure without changing behavior
   - test — adding or improving test coverage
   - analyze — understanding or investigating something
   - design — UI/UX or architecture planning
   - optimize — improving performance
2. Stack: infer languages and frameworks from LANG_SIGNALS and CHANGED_EXTS.
3. Domain tags: pick all that apply from frontend, backend, database, security, testing, deployment, performance, documentation, architecture, mobile, api, infrastructure, data.

Print the context profile in this format before proceeding:
CONTEXT PROFILE
───────────────
Problem: [1-2 sentences]
Intent: [action intent]
Stack: [languages/frameworks]
Domains: [comma-separated domain tags]
Keywords: [comma-separated keywords]
## Phase 2: Candidate Discovery

Goal: Build a candidate list from the available skills.
The full skill list is already loaded in your context (from the system-reminder's "The following skills are available" section). You do NOT need to read files — use the in-context list directly.
Steps:
From the system-reminder skill list, extract every skill's name and description.
Group skills by domain bucket:
| Bucket | Skill name patterns to look for |
|---|---|
| testing | tdd, test, pytest, jest, coverage, e2e, playwright, spec |
| security | security, auth, vulnerability, owasp, bounty, pentest |
| code-quality | review, lint, simplify, refactor, clean, style, standards |
| deployment | ship, deploy, land, canary, pm2, docker, ci, cd |
| frontend | frontend, ui, design, figma, css, react, vue, html, animation |
| backend | backend, api, rest, graphql, server, express, fastapi, spring |
| database | database, sql, postgres, clickhouse, migration, schema |
| documentation | docs, readme, update-docs, codemaps, openapi |
| planning | plan, autoplan, blueprint, office-hours, architect, prp |
| performance | performance, optimize, bundle, lighthouse, profil |
| infrastructure | kubernetes, terraform, aws, cloud, gstack, mcp |
| mobile | flutter, android, ios, kotlin, swift, react-native |
| meta | checkpoint, learn, memory, session, instinct, hookify |
Keep only the skills whose bucket matches the context profile's domain tags. Print the candidate count: Found N candidate skills in relevant buckets.
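The bucket lookup amounts to a pattern match on the skill name. A minimal sketch, assuming the `case`-based helper below (`bucket_for` is a hypothetical name, only three buckets are shown, and the skill names are examples):

```shell
# Hypothetical bucket lookup using the name patterns from the table above.
bucket_for() {
  case "$1" in
    *tdd*|*test*|*pytest*|*jest*|*coverage*|*e2e*|*playwright*|*spec*) echo testing ;;
    *security*|*auth*|*vulnerability*|*owasp*|*bounty*|*pentest*)      echo security ;;
    *ship*|*deploy*|*land*|*canary*|*pm2*|*docker*)                    echo deployment ;;
    *)                                                                 echo unbucketed ;;
  esac
}

bucket_for tdd-workflow   # prints: testing
bucket_for canary         # prints: deployment
```

First match wins, so a name that fits several buckets lands in the earliest one listed.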
## Phase 3: Scoring

Goal: Score every candidate skill and decide what to recommend, suggest, or skip.
For each candidate skill, score 0–100 using this rubric:
| Criterion | Weight | How to evaluate |
|---|---|---|
| Intent match | 35% | Does the skill's purpose directly match the context profile's action intent? Exact match = 35, close match = 20, weak match = 10, no match = 0 |
| Domain match | 30% | How many of the context profile's domain tags appear in this skill's description or bucket? Each match adds ~10 points up to 30 |
| Keyword overlap | 20% | How many of the context profile's keywords appear (roughly) in the skill name or description? Each match adds ~4 points up to 20 |
| Stack match | 15% | Does the skill explicitly target the detected language/framework? Match = 15, stack-agnostic = 10, mismatch = 0 |
Thresholds:
- score ≥ 70: RECOMMENDED
- score 40–69: SUGGEST
- score < 40: SKIP
High-risk override: If a skill appears in the HIGH_RISK_SKILLS registry or matches
the heuristic above, force it to SUGGEST tier and mark it [HIGH-RISK] in the table,
regardless of its numeric score.
Constraint: max 5 RECOMMENDED skills per invocation. If more than 5 score ≥70, take the top 5 by score.
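The rubric arithmetic for a single skill can be sketched as follows. The four component values are assumed to have already been judged per the table above; the numbers here are made up for illustration:

```shell
# Example score for one hypothetical candidate: sum the weighted components,
# then map the total onto the tier thresholds (>=70 RECOMMENDED, >=40 SUGGEST).
intent=35 domain=20 keyword=12 stack=10   # invented judgments, not real data
score=$((intent + domain + keyword + stack))
if [ "$score" -ge 70 ]; then tier=RECOMMENDED
elif [ "$score" -ge 40 ]; then tier=SUGGEST
else tier=SKIP
fi
echo "score=$score tier=$tier"   # prints: score=77 tier=RECOMMENDED
```

Because the component caps already encode the weights (35/30/20/15), a plain sum yields the 0–100 score.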
Print the scoring table (show only skills scoring ≥ 30):
SKILL SCORING
─────────────────────────────────────────────────────────────────
Skill Score Tier Reason
─────────────────────── ───── ────────────── ─────────────────────
tdd-workflow 88 RECOMMENDED intent=create, domain=testing, keyword=test
security-review 82 RECOMMENDED intent=fix, domain=security, keyword=auth
typescript-reviewer 75 RECOMMENDED stack=typescript, domain=code-quality
code-review 72 RECOMMENDED intent=review match
database-reviewer 55 SUGGEST domain=database, weak intent match
ship 71 SUGGEST [HIGH-RISK] score≥70 but forced to SUGGEST — deployment skill
seo 8 SKIP no frontend/content signals
─────────────────────────────────────────────────────────────────
RECOMMENDED: N skills | SUGGEST: M skills (J high-risk) | SKIP: P skills
## Phase 4: User Confirmation

This phase ALWAYS runs before any skill is invoked. There are no exceptions. Even a single RECOMMENDED skill requires explicit user confirmation.
Print the full proposed run as a preview. Do not invoke anything yet:
EXECUTION PLAN
──────────────────────────────────────────────────
# Skill Tier Score Why
── ─────────────────── ────────── ───── ─────────────────────────
1 security-review RECOMMENDED 88 fix intent + security domain
2 investigate RECOMMENDED 82 fix intent + keyword=crash
3 typescript-reviewer RECOMMENDED 75 stack=typescript
4 code-review RECOMMENDED 72 review intent match
5 ship HIGH-RISK 71 deployment — requires confirmation
──────────────────────────────────────────────────
Use AskUserQuestion for EVERY run. Do not skip this step.
Format the question as follows:
autoskill recommends [N] skills for: "[problem statement]"
These skills will run only after you confirm below:
Recommended (score ≥70):
- skill-name — [reason it applies]
- skill-name — [reason it applies]

Also applicable — want any of these?

- skill-name [SUGGEST] — [reason it might apply]
- skill-name [HIGH-RISK] — [why it needs confirmation]

Actions:
- Type the names of any suggested/high-risk skills you want to add
- Type "none" to run only the recommended skills
- Type "cancel" to stop and do nothing
If the user replies "cancel" or selects no skills and there are no recommended skills, abort and report BLOCKED.
Add any user-selected skills to the execution queue before continuing.
## Phase 5: Execution

Goal: Apply each queued skill in order.
Execution order: as queued (recommended skills first, then any user-added ones).

For each skill in the queue:

1. Print: → Applying [skill-name] (score: [N]) — [one-line reason]
2. Invoke: Skill(skill="[skill-name]", args="[relevant portion of the original problem statement]")
3. If the skill returns BLOCKED or NEEDS_CONTEXT, note it in the audit table and continue to the next skill. Do not abort the entire queue for one blocked skill.
If NO skills score ≥40:
Do not silently do nothing. Instead use AskUserQuestion:
No skills scored above the applicability threshold for: "[problem statement]"
This usually means the request is best handled directly (not via a specialized skill), or the problem description needs more context.
Options:
A) Let me handle this directly without a skill
B) Tell me more about what you need (I'll re-score)
C) Show me all available skills so I can pick manually
After all skills have run (or been skipped), print the final report:
## autoskill Run Complete
**Problem:** [problem statement]
**Intent:** [action intent] | **Stack:** [stack] | **Domains:** [domains]
| Skill | Score | Tier | Applied | Outcome | Reason |
|-------|-------|------|---------|---------|--------|
| tdd-workflow | 88 | RECOMMENDED | ✅ | completed | create intent + testing domain |
| security-review | 82 | RECOMMENDED | ✅ | completed | fix intent + security domain |
| ship | 71 | HIGH-RISK | ⏸ User | skipped | user declined |
| database-reviewer | 55 | SUGGEST | ✅ User | completed | user approved |
| seo | 8 | SKIP | ❌ | — | no frontend signals |
**Summary:** [N] skills applied, [M] skipped, [K] blocked.
Report final status as one of:

- COMPLETED: every confirmed skill ran
- PARTIAL: some skills ran, others were skipped or blocked
- BLOCKED: the user cancelled, or no applicable skills were found

autoskill is a meta-skill that recommends and routes to other skills. It does not perform any file modifications, deployments, or external communications itself. However, the skills it recommends may do so. Because of this: