OpenClaw Skill Reviewer
Analysis
The skill is a coherent instruction-only reviewer, but it tells the agent to test scripts from skills under review without clear sandboxing or approval safeguards.
Findings (3)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Checks for instructions or behavior that redirect the agent, misuse tools, execute unexpected code, cascade across systems, exploit user trust, or continue outside the intended task.
**For skills with scripts:**
- Test scripts to ensure they work correctly
- Verify output matches expected format
- Check error handling
This directs the agent to execute or test scripts contained in the skill being reviewed, but the artifacts do not require sandboxing, prior static inspection, least-privilege execution, or user approval.
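One way to add the "prior static inspection" safeguard this finding calls for is to gate any script test behind a simple static scan and only clear scripts that come back clean. A minimal sketch; the pattern list and function names are illustrative, not a complete denylist and not part of the skill under review:

```python
import re
from pathlib import Path

# Hypothetical pre-execution gate: these patterns are illustrative examples
# of risky constructs, not an exhaustive denylist for untrusted skill scripts.
SUSPICIOUS = [
    r"\bcurl\b", r"\bwget\b",      # network fetches
    r"\beval\b", r"\bexec\(",      # dynamic code execution
    r"os\.system", r"subprocess",  # shelling out
    r"rm\s+-rf",                   # destructive commands
]

def flag_script(text: str) -> list[str]:
    """Return the suspicious patterns found in a script's source."""
    return [p for p in SUSPICIOUS if re.search(p, text)]

def safe_to_test(script: Path) -> bool:
    """Only clear a script for (still sandboxed) testing if the scan is clean."""
    return not flag_script(script.read_text(errors="replace"))
```

A clean scan should lower risk, not grant trust: even cleared scripts would still warrant a sandbox and user approval before execution.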
`python3 /home/yupeng/.npm-global/lib/node_modules/openclaw/skills/skill-creator/scripts/package_skill.py <skill-path>`
The validation workflow invokes a hardcoded, user-specific absolute path to a script that is not bundled with this skill, so the script's provenance and behavior fall outside the provided artifacts.
Checks for exposed credentials, poisoned memory or context, unclear communication boundaries, or sensitive data that could leave the user's control.
Example: If skill generates AGENTS.md templates, compare with `/home/yupeng/.openclaw/workspace/AGENTS.md`
The skill points to a local persistent agent/workspace instruction file as the reference source. This is purpose-aligned for template verification, but the path is hardcoded and user-specific, and the referenced file may contain local context rather than a clean public specification.
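A lower-risk alternative is for the skill to bundle its own reference template and diff generated output against that, rather than against a user's live workspace file. A sketch; the `references/AGENTS.md` location inside the skill directory is an assumption, not something the artifacts specify:

```python
import difflib
from pathlib import Path

def diff_against_reference(generated: str, reference: str) -> list[str]:
    """Unified diff of generated output versus a reference template."""
    return list(difflib.unified_diff(
        reference.splitlines(), generated.splitlines(),
        fromfile="reference/AGENTS.md", tofile="generated/AGENTS.md",
        lineterm="",
    ))

def verify_template(generated: str, skill_dir: Path) -> bool:
    """True when output matches the reference shipped with the skill.

    Assumes the skill bundles its template at references/AGENTS.md."""
    reference = (skill_dir / "references" / "AGENTS.md").read_text()
    return not diff_against_reference(generated, reference)
```

Bundling the reference also makes verification reproducible across machines, since the comparison no longer depends on whatever happens to live in one user's workspace.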
