Skill Evaluator
v1.0.0
Evaluate Clawdbot skills for quality, reliability, and publish-readiness using a multi-framework rubric (ISO 25010, OpenSSF, Shneiderman, and agent-specific heuristics). Use when asked to review, audit, evaluate, score, or assess a skill before publishing, or when checking skill quality. Runs automated structural checks and guides manual assessment across 25 criteria.
⭐ 3 · 2.4k · 9 current · 10 all-time
by @terwox
MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Benign (high confidence)
Purpose & Capability
Name/description match the delivered artifacts: SKILL.md describes running scripts/evaluations, and the repo contains scripts/eval-skill.py, a rubric (references/rubric.md), and an evaluation template. The checks the script implements (frontmatter, file structure, docs, simple script analysis) are consistent with the stated evaluator purpose.
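As an illustration of the frontmatter check mentioned above, here is a minimal stdlib-only sketch; the function name and the expected keys are assumptions, not the script's actual implementation (SKILL.md documents that the real script uses PyYAML):

```python
def check_frontmatter(skill_md_text):
    """Report expected keys missing from a SKILL.md frontmatter block.

    Hypothetical check: parses simple 'key: value' lines between the
    first pair of '---' delimiters instead of full YAML.
    """
    parts = skill_md_text.split("---", 2)
    if len(parts) < 3 or parts[0].strip():
        return ["missing frontmatter block"]
    meta = {}
    for line in parts[1].splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    expected = ["name", "description"]  # assumed required keys
    return [f"missing key: {k}" for k in expected if k not in meta]

print(check_frontmatter("---\nname: demo\n---\nBody"))
```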
Instruction Scope
SKILL.md explicitly instructs the agent to run the local script (python3 scripts/eval-skill.py /path/to/skill) and to read/skim code and docs — this necessarily requires reading files in the target skill directory, which is intended. Manual scoring steps are required and the evaluator recommends an optional external scanner (SkillLens) — that recommendation is optional and not required for operation.
Install Mechanism
No install spec is provided (instruction-only skill). The included Python script requires Python 3.6+ and PyYAML (documented in SKILL.md). No network downloads, external archives, or package installs are required by the skill itself.
Credentials
The skill requests no environment variables, no credentials, and no config paths. The evaluator script scans files for issues (including credential-like patterns) when run, which is appropriate for its purpose but means you should not run it against directories containing secrets you don't want inspected.
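A credential-pattern scan of the kind described above can be sketched as follows; the specific regexes are illustrative assumptions, not the script's actual rules:

```python
import re

# Illustrative patterns only; the real script's rules are not documented here.
CREDENTIAL_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key id
    re.compile(r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*['\"]?\w{8,}"),
]

def scan_text(text):
    """Return snippets that look like embedded credentials."""
    hits = []
    for pattern in CREDENTIAL_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(text))
    return hits

print(scan_text("api_key = 'abcdefgh12345'"))  # flags the key-like assignment
```

This is also why the caveat above matters: any directory you point the evaluator at will have its file contents read and pattern-matched.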
Persistence & Privilege
always:false and user-invocable:true. The skill does not request persistent agent presence or attempt to modify other skills or system-wide settings. It performs local read-only analysis of a provided skill directory (writes only when you copy the EVAL_TEMPLATE to create EVAL.md, which is an intended publishing artifact).
Scan Findings in Context
[pre-scan-injection-signals] expected: No pre-scan/injection signals were detected. This is expected for a local, instruction-driven evaluator that contains no network or downloader behavior.
Assessment
This skill appears internally consistent and appropriate for reviewing other skills. Before running:
1) Inspect scripts/eval-skill.py yourself (it only reads files and parses YAML/AST; it does not spawn subprocesses or make network calls).
2) Ensure you run it on the intended skill directory (don't point it at system paths or private repos containing secrets).
3) Install Python 3.6+ and PyYAML (pip install pyyaml) if you plan to run the automated checks.
4) Remember that the automated script covers only structural/heuristic checks; manual judgment is required for many rubric items.
5) SKILL.md recommends an optional external tool (SkillLens via npm); it is not required by this skill. Treat external tool recommendations as separate dependencies and review them before use.
Like a lobster shell, security has layers: review code before you run it.
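One way to spot-check the "no subprocesses or network calls" claim yourself is a quick AST pass over the script's imports; the list of risky modules below is an assumption about what counts as risky, not an exhaustive audit:

```python
import ast

# Illustrative set of modules that can spawn processes or reach the network.
RISKY_MODULES = {"subprocess", "socket", "urllib", "http", "requests"}

def risky_imports(source):
    """Return risky top-level modules imported anywhere in the source."""
    found = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            found.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            found.add(node.module.split(".")[0])
    return sorted(found & RISKY_MODULES)

print(risky_imports("import yaml\nimport subprocess"))  # → ['subprocess']
```

An empty result does not prove the script is safe (dynamic imports and exec can hide behavior), so it complements, not replaces, reading the code.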
