Skill Test
v0.1.1 · Evaluate and QA a skill before release on ClawHub, skills.sh, and similar directories. Includes the bundled static evaluator `scripts/eval_skill.py` plus gui...
by Weiwei Fan (@fwwdn)
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Benign · high confidence

Purpose & Capability
Name, description, SKILL.md instructions, and the bundled scripts/evidence files align: the skill is a skill-evaluator and bundles `scripts/eval_skill.py` to perform static checks. The only required binary is python3, and there are no unrelated environment variables or config paths.
Instruction Scope
SKILL.md explicitly instructs the assistant to read the target skill's SKILL.md, run `python3 scripts/eval_skill.py` against the target, and prefer isolation. Those instructions stay within the stated purpose. The bundled evaluator performs filesystem reads and regex parsing of frontmatter and docs, as expected. Note: the included `scripts/eval_skill.py` listing contains a clear coding issue (a 'metada' typo and truncated code near the platform-readiness checks) that may cause runtime exceptions. This is a bug, not malicious behavior, but inspect and fix it before running.
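The frontmatter parsing described above can be pictured with a short sketch. This is not the bundled evaluator's actual code; the regex, field names, and function are illustrative of the kind of static check a skill-evaluator performs on a SKILL.md file:

```python
import re

# Hypothetical sketch of a static frontmatter check: extract the YAML
# frontmatter block from a SKILL.md document and report any required
# keys that are missing. Field names are illustrative assumptions.
FRONTMATTER = re.compile(r"\A---\n(.*?)\n---\n", re.DOTALL)

def check_frontmatter(text, required=("name", "description")):
    """Return a list of missing required keys, or [] if all are present."""
    match = FRONTMATTER.match(text)
    if match is None:
        return list(required)  # no frontmatter block at all
    keys = {line.split(":", 1)[0].strip()
            for line in match.group(1).splitlines() if ":" in line}
    return [key for key in required if key not in keys]

doc = "---\nname: skill-test\ndescription: Evaluate a skill\n---\n# Skill Test\n"
print(check_frontmatter(doc))  # → []
```

A check like this only reads the text it is given, which matches the scan's observation that the evaluator's filesystem access is limited to reads of the target skill directory.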
Install Mechanism
No install spec and no external downloads; this is instruction-only with a bundled Python script, so risk from install mechanics is low. The only code executed if you follow the instructions is the local Python script; the provided files show no network calls or archive extraction.
Credentials
No environment variables, primary credentials, or config paths are requested. That is proportionate for a static skill-evaluator.
Persistence & Privilege
The skill does not request always:true, does not declare persistence, and provides no indication it will modify other skills or system-wide agent settings. It performs read/analysis of files in the target skill directory only (as expected).
Assessment
This package appears coherent and non-malicious, but treat it like any tool that reads and parses local files:
1. Inspect `scripts/eval_skill.py` before running; there is a visible typo/truncation that may crash the script.
2. Run it in an isolated, disposable workspace or container so the evaluator can only read the target skill directory.
3. If you plan to automate evals in CI, run the script on a small sample first and fix the code bug (the 'metada' typo / truncated function) to avoid false failures.
4. If you need runtime rubric grading that contacts external model providers, confirm and provide only the intended API keys to those providers; this package itself does not require credentials.

Like a lobster shell, security has layers: review code before you run it.
Runtime requirements
🧪 Clawdis
Bins: python3
