Install
```
openclaw skills install skill-net
```

Analyze the OpenClaw skill ecosystem — dependencies, orphan detection, ecosystem health score, impact analysis, and skill relationships. Use when the user asks about skill relationships, "what depends on X", "if I delete Y what breaks", ecosystem health, or wants to find skills without trigger conditions (orphans).

Analyze, map, and diagnose the OpenClaw skill ecosystem — not a skill creator, a diagnostic lens.
This skill answers: how does my skill ecosystem actually work?
It scans every SKILL.md, detects dependency relationships, scores ecosystem health, and finds orphans.
Run the complete ecosystem scan and produce a full diagnostic report.
Trigger: "analyze ecosystem", "full scan", "ecosystem health", "skill health", "技能生态" (skill ecosystem), "生态报告" (ecosystem report)
Language options (CLI flags):
```
python3 scripts/analyze_deps.py              # default: ZH then EN
python3 scripts/analyze_deps.py --lang=ZH    # Chinese only
python3 scripts/analyze_deps.py --lang=EN    # English only
python3 scripts/analyze_deps.py --lang=BOTH  # ZH then EN (default)
```
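The flag handling above maps directly onto `argparse`. This is a minimal sketch, not the actual contents of `scripts/analyze_deps.py`; only the flag names, choices, and default are taken from the list above:

```python
import argparse

def parse_lang(argv=None):
    """Parse the --lang flag; BOTH (ZH then EN) is the default."""
    parser = argparse.ArgumentParser(
        description="Analyze the OpenClaw skill ecosystem")
    parser.add_argument(
        "--lang", choices=["ZH", "EN", "BOTH"], default="BOTH",
        help="Report language: ZH, EN, or BOTH (ZH then EN)")
    return parser.parse_args(argv).lang
```

`argparse` accepts both `--lang=ZH` and `--lang ZH`, so the invocations shown above work unchanged.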
Output sections:
All sections rendered in the requested language (ZH/EN) with full bilingual labels.
Answer specific questions from cached or fresh data.
Trigger: "what depends on X", "if I delete Y", "who references Z", "core skills", "most connected skill"
Execution: Answer from data/ecosystem.json or run fresh scan.
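Impact queries like "what depends on X" reduce to a lookup in the cached graph. A minimal sketch, assuming `data/ecosystem.json` maps each skill name to its `mentions` (outgoing) and `referenced_by` (incoming) lists; the real schema may differ:

```python
import json
from pathlib import Path

def who_depends_on(name, path="data/ecosystem.json"):
    """Return the skills that would break if `name` were deleted."""
    ecosystem = json.loads(Path(path).read_text(encoding="utf-8"))
    # Incoming edges are the skills that reference `name`.
    return sorted(ecosystem.get(name, {}).get("referenced_by", []))
```

"If I delete Y what breaks" is then just `who_depends_on("Y")`; an unknown skill yields an empty list rather than an error.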
Find all skills with SKILL.md but missing trigger conditions.
Trigger: "find orphans", "skills without triggers", "dead skills", "missing triggers"
Output: List of orphan skills with line count and frontmatter name.
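Orphan detection is essentially a scan for SKILL.md files whose text contains none of the recognized trigger markers. A sketch under that assumption (the marker regex is illustrative, not the skill's exact heuristic):

```python
import re
from pathlib import Path

# Illustrative markers: "use when" phrases, "trigger" labels,
# or a line that opens with a /protocol command.
TRIGGER_MARKERS = re.compile(r"use when|trigger|^/[a-z]+",
                             re.IGNORECASE | re.MULTILINE)

def find_orphans(root):
    """Yield (skill_dir_name, line_count) for SKILL.md files lacking triggers."""
    for skill_md in Path(root).rglob("SKILL.md"):
        text = skill_md.read_text(encoding="utf-8", errors="replace")
        if not TRIGGER_MARKERS.search(text):
            yield skill_md.parent.name, len(text.splitlines())
```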
Compare two skills side-by-side.
Trigger: "compare X and Y", "X vs Y dependencies", "skill X relationship to Y"
Output: Shared mentions, relationship type, overlap analysis.
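The overlap analysis above can be sketched as set operations on each skill's mention lists (assuming the same illustrative `ecosystem.json` shape, with a `mentions` list per skill; field names are assumptions):

```python
def compare_skills(a, b, ecosystem):
    """Side-by-side relationship summary for two skills."""
    a_out = set(ecosystem.get(a, {}).get("mentions", []))
    b_out = set(ecosystem.get(b, {}).get("mentions", []))
    return {
        "shared_mentions": sorted(a_out & b_out),  # overlap
        "a_mentions_b": b in a_out,                # direct dependency a -> b
        "b_mentions_a": a in b_out,                # direct dependency b -> a
    }
```

The relationship type falls out of the three fields: mutual mentions suggest siblings, a one-way mention suggests a dependency, and shared mentions with no direct edge suggest two consumers of the same hub.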
The ecosystem reveals structural patterns invisible from casual observation:
| Finding | Evidence |
|---|---|
| True core hub: /review | 53 skills reference /review — by far the most connected |
| /qa is a secondary hub | 9 skills reference /qa |
| /summarize, /weather are utility anchors | Referenced by 2+ skills each |
| 100/123 skills lack triggers | Many use /protocol style instead of "use when" |
| Ecosystem Health: 22.6/100 | Most skills missing metadata and trigger conditions |
| review and qa are invisible hubs | They don't use the skill- prefix — they are protocol commands |
How it works: scan ~/.openclaw/skills/ and ~/.openclaw/workspace/skills/, read every SKILL.md, detect trigger conditions (use when / trigger / /protocol), build mentions (outgoing) + referenced_by (incoming) for each skill, and write data/ecosystem.json + data/report.md.

Health Score = (
trigger_coverage × 30% +
metadata_complete × 20% +
cross_reference × 20% +
ecosystem_cohesion × 30%
)
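The weighted formula above is a straightforward dot product of component scores (each on a 0-100 scale) and their weights. A sketch:

```python
WEIGHTS = {
    "trigger_coverage": 0.30,
    "metadata_complete": 0.20,
    "cross_reference": 0.20,
    "ecosystem_cohesion": 0.30,
}

def health_score(components):
    """Weighted average of the four component scores (each 0-100)."""
    return sum(components[k] * w for k, w in WEIGHTS.items())
```

With trigger coverage at 23/123 and most skills missing metadata, a composite in the low twenties (like the 22.6 reported here) follows naturally from the 30/20/20/30 weighting.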
Your ecosystem: 22.6/100 — ample room for improvement.
Surprising insight: The most-connected nodes are protocol commands (/review, /qa), not skill-* named skills. These protocol skills are referenced by code patterns like:
```
# Many skills open with this:
# /review — Structured Code Review Protocol
# /qa — Quality Assurance Execution Protocol
```
This means traditional dependency detection (looking for skill-X mentions) severely underestimates real relationships.
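This suggests detection needs two patterns, not one. A sketch that extracts both skill-* mentions and /protocol command references from a SKILL.md body (the regexes and the protocol-command list are illustrative):

```python
import re

# Explicit skill-name references, e.g. "skill-factory".
SKILL_REF = re.compile(r"\bskill-[a-z0-9-]+\b")
# Protocol commands, e.g. "/review"; the lookbehind avoids
# matching path segments like "a/review".
PROTOCOL_REF = re.compile(r"(?<![\w/])/(review|qa|careful|cso)\b")

def extract_references(text):
    """Return both kinds of outgoing references found in a SKILL.md body."""
    return {
        "skill_refs": sorted(set(SKILL_REF.findall(text))),
        "protocol_refs": sorted(set(PROTOCOL_REF.findall(text))),
    }
```

Scanning with only the first pattern is what produces the underestimate: the 53 incoming edges on /review are invisible to it.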
True dependency types:
- Explicit skill mentions: skill-factory, gupiao, bazi
- Protocol commands: /review, /qa, /careful, /cso
- Utility references: clawhub, mmx, summarize, weather
- Skip .git/, .venv/, __pycache__/ in scans

The output must:
- Scan both ~/.openclaw/skills/ and ~/.openclaw/workspace/skills/
- Write data/ecosystem.json + data/report.md
- Honor the --lang flag

Good:
"🔵 review (Core Hub, 53 incoming): /review is referenced by 53 skills. If deleted, these skills lose their review protocol: gupiao, proactive-agent, skill-vetter..."
Bad:
"Here are all skills listed alphabetically"
Good Orphan Report:
"⚠️ Found 100 orphans — most are protocol-style skills (review, qa, careful, cso) that use
/command activation instead of 'use when' phrases. These are not broken, just designed differently."
Good Query:
"DELETE review → Breaks 53 skills including: gupiao, marketing-, engineering-, testing-, project-. This is the most critical skill in the ecosystem."