Skill Evaluator
Pass
Audited by VirusTotal on May 12, 2026.
Overview
Type: OpenClaw Skill
Name: skill-evaluator
Version: 1.0.0

The OpenClaw skill 'skill-evaluator' is designed to assess the quality and security of other skills. Its `SKILL.md` provides clear, task-oriented instructions for the AI agent to execute a local Python script (`scripts/eval-skill.py`) on a specified skill directory. The Python script performs static analysis and content checks, including looking for hardcoded credentials and undocumented environment variables within the *target skill* being evaluated, which is a legitimate security function for an evaluator. There is no evidence of prompt injection, data exfiltration, malicious execution, or persistence mechanisms targeting the agent's environment or unrelated data.
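The kind of static check described above can be sketched in a few lines of Python. This is an illustrative example only, not the actual contents of `scripts/eval-skill.py`; the patterns and function name are assumptions.

```python
import re

# Illustrative patterns; the real eval-skill.py may use different heuristics.
CREDENTIAL_RE = re.compile(
    r"(?i)(api[_-]?key|secret|token|password)\s*[=:]\s*['\"][^'\"]+['\"]"
)
ENV_VAR_RE = re.compile(r"os\.environ(?:\.get)?\(\s*['\"]([A-Z0-9_]+)['\"]")

def scan_text(text):
    """Return (credential_hits, env_vars_referenced) for one file's text."""
    creds = [m.group(0) for m in CREDENTIAL_RE.finditer(text)]
    env_vars = sorted(set(ENV_VAR_RE.findall(text)))
    return creds, env_vars

sample = 'API_KEY = "abc123"\nimport os\nurl = os.environ.get("EVAL_ENDPOINT")\n'
creds, envs = scan_text(sample)
```

A real evaluator would apply checks like these to every file in the target skill directory and cross-reference the environment variables it finds against what the skill's documentation discloses.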
Findings (0)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Running the command lets the evaluator script read and analyze files under the specified skill directory.
The skill instructs users to run an included Python helper script against a local skill directory. This is disclosed and central to the automated evaluation purpose.
python3 scripts/eval-skill.py /path/to/skill
Run it only on skill directories you intend to evaluate, and review the local script/dependencies if using it in a sensitive workspace.
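One way to follow that advice is a small pre-flight wrapper around the documented command. This is a hypothetical sketch (the `evaluate` function and the `SKILL.md` check are assumptions, not part of the skill); it simply refuses to run the evaluator on paths that do not look like skill directories.

```python
import pathlib
import subprocess
import sys

def evaluate(skill_dir):
    # Hypothetical pre-flight guard: only scan directories that contain a
    # SKILL.md, i.e. directories you actually intend to evaluate.
    path = pathlib.Path(skill_dir).resolve()
    if not (path / "SKILL.md").is_file():
        raise SystemExit(f"{path} does not look like a skill directory (no SKILL.md)")
    # Invoke the bundled script exactly as the skill documents.
    subprocess.run([sys.executable, "scripts/eval-skill.py", str(path)], check=True)
```

The guard costs nothing and prevents accidentally pointing the evaluator at an unrelated or sensitive directory.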
If the user chooses to run those commands, third-party package code from PyPI or npm may execute in their environment.
The documentation discloses a manual PyYAML dependency and an optional external npm-based scanner. These are purpose-aligned, but they involve third-party package execution outside the skill's own bundled files.
`pip install pyyaml` ... `npx skilllens scan <path>`
Use trusted environments, consider pinning package versions, and run the optional npx scanner only if you trust that tool and need the deeper scan.
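Pinning can be verified programmatically before relying on the manually installed dependency. A minimal sketch, assuming Python 3.8+ (`importlib.metadata` is standard library); the helper name and the example pin `pyyaml==6.0.2` are illustrative, not prescribed by the skill.

```python
import importlib.metadata

def pyyaml_version():
    # Report the installed PyYAML version, or None if it is missing, so a
    # pinned requirement (e.g. pip install "pyyaml==6.0.2") can be verified
    # before the evaluator script is run.
    try:
        return importlib.metadata.version("PyYAML")
    except importlib.metadata.PackageNotFoundError:
        return None
```

A check like this turns a silent "wrong or missing dependency" failure into an explicit, actionable one.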
