Skill v1.0.0

ClawScan security

Improvement Learner · ClawHub's context-aware review of the artifact, metadata, and declared behavior.

Scanner verdict

Suspicious · Apr 5, 2026, 5:18 PM
Verdict
suspicious
Confidence
medium
Model
gpt-5-mini
Summary
The skill's code and docs match its stated purpose, but it relies on undocumented external dependencies (lib.* imports) and an external 'claude' CLI invocation which are not declared in the SKILL.md/metadata — this mismatch deserves caution before running.
Guidance
What to consider before installing/running:
- The skill appears to do what it says (evaluate and auto-improve SKILL.md), but the Python scripts import external modules (lib.common, lib.pareto) that are not included; running may fail or silently import code from an unexpected repo root. Verify those dependencies exist and inspect them.
- The script will call a local 'claude' CLI via subprocess.run when available. If you have a 'claude' binary configured, skill text may be sent through that client — treat SKILL.md and any files you point it at as potentially sent to that service. If you don't want that, run with the --mock flag or ensure 'claude' is not on PATH.
- The tools write memory and report files to directories you specify (memory-dir, output); review and choose those paths to avoid exposing sensitive data.
- Optional plotting requires matplotlib/numpy; tests expect a Python test runner. Run in a sandbox or isolated environment first to confirm behavior.
- If you plan to let the agent invoke this autonomously, be aware that the ability to call an external LLM client increases blast radius; consider restricting execution or reviewing the code paths that call subprocess.run.
- To raise confidence to 'benign', provide the missing lib.* implementations or confirm they come from a trusted upstream, and validate that 'claude' usage is acceptable for your environment.
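A minimal sketch of the PATH check suggested above: detect whether a 'claude' binary would resolve, and fall back to the skill's --mock mode when it would. The --mock flag is documented by the skill per this review; the script name in the comment is a hypothetical placeholder.

```python
import shutil

def claude_available() -> bool:
    """Return True if a 'claude' binary resolves on PATH.

    Per the review, the bundled scripts route skill text through this
    client whenever it is present, so its mere presence matters.
    """
    return shutil.which("claude") is not None

# If you don't want evaluation content sent to a local LLM client,
# pass the documented --mock flag instead, e.g. (hypothetical script
# name): python evaluate.py --mock
use_mock = not claude_available()
```

This keeps the decision explicit rather than depending silently on whatever happens to be installed on the machine running the skill.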

Review Dimensions

Purpose & Capability
Note · The SKILL.md, CLI examples, and Python scripts all implement a self-improvement / evaluation loop as described. However, the scripts import lib.common and lib.pareto from a repo root that is not included in the bundle; those external libraries are required for normal operation but are not declared in the skill metadata. The code also expects a local 'claude' CLI for LLM-based judging (with a regex fallback).
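One way to verify the undeclared lib.* dependencies before running, as a hedged sketch: resolve each module's origin so you can confirm it loads from the trusted upstream rather than an unexpected repo root. The module names come from this review; adjust for your checkout.

```python
from importlib.util import find_spec

def resolve_lib_modules(modules=("lib.common", "lib.pareto")):
    """Map each undeclared module name to the file it would import from.

    A value of None means the module (or its parent package) does not
    resolve at all and the skill's scripts would fail to start.
    """
    origins = {}
    for mod in modules:
        try:
            spec = find_spec(mod)
        except ModuleNotFoundError:
            # The parent package 'lib' itself is missing from sys.path.
            spec = None
        origins[mod] = spec.origin if spec else None
    return origins

origins = resolve_lib_modules()
```

Inspecting the returned paths tells you whether the imports would silently pick up code from somewhere you did not intend.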
Instruction Scope
Note · Runtime instructions only ask you to run the included scripts (evaluate, self-improve, track progress). The scripts read SKILL.md and reports directories and write memory and report files. They do not instruct access to unrelated system paths or secrets, but they do call an external LLM CLI ('claude') via subprocess, which will send skill content to that client when available.
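The review recommends auditing the code paths that reach subprocess. A small sketch of that audit: list every line in the bundled scripts that mentions subprocess.run, so the 'claude' invocations can be reviewed by hand. The directory layout is an assumption; point it at wherever you unpacked the skill.

```python
from pathlib import Path

def find_subprocess_calls(skill_dir):
    """Return (file, line number, line text) for each subprocess.run hit.

    This is a plain text scan, not a full static analysis, so it can
    miss aliased imports — treat it as a starting point for review.
    """
    hits = []
    for script in sorted(Path(skill_dir).rglob("*.py")):
        for lineno, line in enumerate(script.read_text().splitlines(), 1):
            if "subprocess.run" in line:
                hits.append((str(script), lineno, line.strip()))
    return hits
```

Running this before first use shows exactly where skill content could leave the machine via an external client.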
Install Mechanism
OK · There is no install spec (instruction-only plus included scripts). No remote downloads or archive extraction are present in the bundle itself, reducing installation risk. However, runtime requires Python and optional plotting libs (matplotlib/numpy) that are not declared.
Credentials
Note · The skill declares no required environment variables or credentials. Nevertheless, it invokes an external LLM client ('claude') if present, which is an undocumented runtime dependency and could transmit evaluation content to that service. No secrets are requested by the skill itself.
Persistence & Privilege
OK · 'always' is false and the skill does not request system-wide privileges. It writes memory files to a user-specified memory-dir and report files to output directories; it does not modify other skills or global agent configuration.