Skill v1.0.0
ClawScan security
Skill Creator by Anthropic · ClawHub's context-aware review of the artifact, metadata, and declared behavior.
Scanner verdict
Benign · Mar 14, 2026, 4:37 PM
- Verdict: benign
- Confidence: medium
- Model: gpt-5-mini
- Summary: The skill's files and instructions are coherent with a 'skill creator / evaluator' purpose, but bundled scripts read workspace files and can serve them over HTTP; review workspace contents and run in a sandbox before executing.
- Guidance: This skill appears to do what it says: a toolkit for writing and evaluating skills. Before running any scripts:
  - Inspect the workspace directory you plan to point the scripts at. The eval-viewer and related scripts will recursively read many files and embed them into an HTML page (including non-text binaries via base64). Remove any secrets, credentials, or files you would not want served to a local browser or accidentally shared.
  - The viewer script may invoke OS utilities (it calls subprocesses such as lsof and may send signals) and will open a local HTTP server and a browser tab. Run it in an isolated or sandboxed environment (temporary VM, container, or dedicated evaluation machine) if you are unsure.
  - The project is licensed under Apache-2.0, so you can reuse or modify the scripts, but review the code before execution. If you only need small parts of the skill, consider extracting and running those scripts locally after inspection rather than running the full workflow.
  - If you need stronger assurances, ask the maintainer for a minimal mode that restricts file discovery (or patch generate_review.py to whitelist/blacklist paths) and avoids invoking system commands.

  Overall: coherent and useful for its stated purpose, but exercise standard caution about filesystem and subprocess access when running bundled tooling.
Review Dimensions
- Purpose & Capability
- ok: The name/description (create, evaluate, and improve skills) aligns with the shipped assets: SKILL.md plus multiple evaluator/packaging scripts (grader, comparator, analyzer, eval viewer, packaging, etc.). No unrelated required env vars or binaries are declared, and the included scripts map to the stated functionality.
- Instruction Scope
- concern: SKILL.md instructs the agent to use supplied scripts (e.g., eval-viewer/generate_review.py) to run evaluations and present results. Those scripts recursively read the workspace, embed arbitrary output files into a self-contained HTML page, write feedback.json, open a web browser, and run OS commands (via subprocess) to manage ports. This behavior is consistent with 'showing evaluation results' but will also read and expose any files present in the workspace (notably it only excludes a small set of metadata files). SKILL.md does not explicitly warn about potential exposure of secrets or other sensitive files in the workspace.
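The base64 embedding that makes the generated HTML self-contained is also what exposes workspace bytes to anyone the page is served to or shared with. A minimal sketch of the pattern, assuming a data-URI approach; the function name and markup are illustrative, not the eval-viewer's actual code:

```python
import base64
import mimetypes
from pathlib import Path

# Illustrative sketch of inlining a file into HTML as a data URI, the kind of
# embedding the review describes; not taken from the bundled viewer's source.
def embed_as_data_uri(path):
    """Return an HTML link whose href carries the file's raw bytes, base64-encoded."""
    path = Path(path)
    data = path.read_bytes()
    mime = mimetypes.guess_type(path.name)[0] or "application/octet-stream"
    b64 = base64.b64encode(data).decode("ascii")
    return f'<a download="{path.name}" href="data:{mime};base64,{b64}">{path.name}</a>'
```

Because the bytes live inside the HTML itself, any file swept up by discovery (a .env, a private key) travels with the page wherever it is copied.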
- Install Mechanism
- ok: No install spec is provided (instruction-only), and all code is bundled in the skill. The included Python scripts use only stdlib modules and do not download or execute external archives. No remote installs or URL downloads were observed in the provided files.
- Credentials
- note: The skill declares no environment variables or credentials (appropriate). However, the scripts require filesystem access to the workspace, call subprocesses (e.g., lsof, os.kill), and open a local HTTP server and browser. While these are proportionate to the stated purpose (inspecting and serving eval outputs), they can access and expose arbitrary files in the workspace, so the lack of declared secrets does not eliminate data-exfiltration risk if a user runs the scripts in a directory containing sensitive files.
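The port handling described here (shelling out to lsof and sending signals) is more invasive than a simple availability check needs to be. A non-destructive probe can be written with the stdlib socket module; this is a sketch of an alternative, not the bundled script's actual logic:

```python
import socket

# Sketch of a non-destructive port check; the reviewed script reportedly uses
# lsof plus os.kill instead, which can terminate unrelated processes.
def port_in_use(port, host="127.0.0.1"):
    """Return True if something is accepting connections on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        return s.connect_ex((host, port)) == 0
```

With a check like this, a viewer can simply pick another port when the preferred one is busy, rather than killing whatever holds it.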
- Persistence & Privilege
- ok: The skill is not always-on and does not request persistent system privileges. It saves feedback.json and serves a local HTML UI, which are normal for a review tool. It does use subprocesses to manage ports and may write files in the workspace, but it does not request or appear to modify other skills' configs or system-wide agent settings.
