Skill Creator (Opencode)
Analysis
The skill mostly matches its skill-building purpose, but its review tooling can embed untrusted outputs into browser-executed HTML and includes a helper that can terminate local processes.
Findings (5)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Checks for instructions or behavior that redirect the agent, misuse tools, execute unexpected code, cascade across systems, exploit user trust, or continue outside the intended task.
content = path.read_text(errors="replace") ... data_json = json.dumps(embedded) ... return template.replace("/*__EMBEDDED_DATA__*/", f"const EMBEDDED_DATA = {data_json};")Eval output file contents are embedded directly into a JavaScript assignment in an HTML template. If an evaluated output contains script-breaking text such as a closing script tag, the generated review page could execute attacker-controlled JavaScript when opened.
def _kill_port(port: int) -> None:
"""Kill any process listening on the given port."""
... subprocess.run(["lsof", "-ti", f":{port}"], ...)
... os.kill(int(pid_str.strip()), signal.SIGTERM)The review helper contains a routine that finds and terminates any process using the selected port. The code shown is not limited to processes started by this skill and does not show a user-confirmation boundary.
Read the winner skill's SKILL.md and key referenced files ... Read the winner's transcript ... Read the loser's transcript
The analyzer agent is expected to ingest skill files and transcripts from eval runs. That is necessary for analysis, but evaluated skills and transcripts can contain instructions aimed at the reviewing agent unless explicitly treated as data.
## Requirements - Python 3.9+ - Opencode CLI installed
The documentation discloses local runtime dependencies, while the registry metadata lists no required binaries and no install specification. This is not hidden, but it means users need to read the docs to understand the executable dependency.
Checks for exposed credentials, poisoned memory or context, unclear communication boundaries, or sensitive data that could leave the user's control.
embeds all output data into a self-contained HTML page, and serves it via a tiny HTTP server. Feedback auto-saves to feedback.json in the workspace.
The eval viewer intentionally stores and reuses eval outputs and reviewer feedback. This supports the workflow, but those outputs may include sensitive prompts, files, or generated artifacts.
