Skill Evaluator
Verdict: Pass. Audited by ClawScan on May 1, 2026.
Overview
This skill appears purpose-aligned and benign, with routine caution for running its local Python evaluator and optional third-party package commands.
Treat this skill as a local review tool: run the Python evaluator only on skill directories you intend to inspect, note that it may read those files and create an EVAL.md report, and run the optional pip or npx commands only if you trust the packages involved.
Findings (2)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Finding 1: Local evaluator script execution
Running the command lets the evaluator script read and analyze files under the specified skill directory.
The skill instructs users to run an included Python helper script against a local skill directory. This is disclosed and central to the automated evaluation purpose.
`python3 scripts/eval-skill.py /path/to/skill`
Run it only on skill directories you intend to evaluate, and review the local script/dependencies if using it in a sensitive workspace.
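A minimal session following that advice might look like the sketch below. The skill path and the EVAL.md location are assumptions drawn from this report's description, not verified against the evaluator's code:

```shell
# Hypothetical session; directory layout is an assumption, not verified.

# Inspect the helper script and its imports before running it
# in a sensitive workspace.
less scripts/eval-skill.py

# Run the bundled evaluator against a single, intentionally chosen directory.
python3 scripts/eval-skill.py ./skills/my-skill

# The evaluator is described as creating EVAL.md; review the report
# before acting on its results.
cat ./skills/my-skill/EVAL.md
```

Reading the script first matters because the evaluator runs with your user's full filesystem permissions, not in a sandbox.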
Finding 2: Third-party package execution
If the user chooses those commands, third-party package code from PyPI or npm may run in their environment.
The documentation discloses a manual PyYAML dependency and an optional external npm-based scanner. These are purpose-aligned, but they involve third-party package execution outside the skill's own bundled files.
`pip install pyyaml` ... `npx skilllens scan <path>`
Use trusted environments, consider pinning package versions, and run the optional npx scanner only if you trust that tool and need the deeper scan.
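One way to follow the pinning advice is sketched below. The version numbers are illustrative placeholders, not vetted releases; check PyPI and npm for the versions you actually want before running either command:

```shell
# Hedged sketch: versions below are placeholders, not recommendations.

# Pin the PyYAML dependency instead of installing whatever "latest" resolves to.
pip install 'pyyaml==6.0.2'

# Invoke the optional scanner with an explicit pinned version, so a new
# release cannot silently change what executes. --yes skips the npx
# install-confirmation prompt.
npx --yes skilllens@1.2.3 scan /path/to/skill
```

Pinning does not make the packages trustworthy by itself; it only ensures that the code you reviewed is the code that runs.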
