Skill v1.0.0

ClawScan security

ClawGuard · ClawHub's context-aware review of the artifact, metadata, and declared behavior.

Scanner verdict

Benign · Feb 20, 2026, 2:32 PM
Verdict
benign
Confidence
medium
Model
gpt-5-mini
Summary
ClawGuard's files and runtime instructions are internally consistent with a local, read-only skill auditor and its requested footprint (no env vars, no installs), but there are a couple of small items you should manually verify before trusting it automatically.
Guidance
ClawGuard appears coherent and appropriate for the claimed purpose. Before installing or relying on it:
1. Open scripts/scan.py and confirm there are no network calls (look for imports or use of requests, urllib, or socket, for subprocess calls that reach the network, or for explicit HTTP requests).
2. Verify that scan.py does not execute scanned code or write to locations outside the scanned skill directory.
3. If you rely on 'repo star count' or 'GitHub account age', confirm how those metrics are computed (local git metadata vs. remote API).
4. Run the scanner on a copy of a target skill in a sandbox first.
Finally, because the SKILL.md intentionally contains prompt-injection examples, those phrases may trigger other automated reviews; this is expected but worth noting.
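Step (1) of the guidance can be partially automated. The sketch below is a hypothetical helper, not part of ClawGuard: it statically parses a Python file and flags imports of modules commonly used for network access. The module list is illustrative and not exhaustive (it will not catch dynamic imports or subprocess-driven network use), so it supplements rather than replaces a manual read of scan.py.

```python
# Hypothetical pre-install check: statically list imports in a Python
# file and flag modules commonly associated with network access.
# The NETWORK_MODULES set is an illustrative assumption, not a list
# taken from ClawGuard or ClawScan.
import ast

NETWORK_MODULES = {"requests", "urllib", "urllib3", "http", "socket", "ftplib"}

def flag_network_imports(path):
    """Return a sorted list of flagged top-level module names imported by the file."""
    with open(path, encoding="utf-8") as f:
        tree = ast.parse(f.read())
    hits = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            names = [alias.name.split(".")[0] for alias in node.names]
        elif isinstance(node, ast.ImportFrom):
            names = [(node.module or "").split(".")[0]]
        else:
            continue
        hits.update(n for n in names if n in NETWORK_MODULES)
    return sorted(hits)

# Usage: flag_network_imports("skills/clawguard/scripts/scan.py")
# An empty list is consistent with the 'no external calls' claim,
# but is not proof of it.
```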
Findings
[ignore-previous-instructions] expected: The SKILL.md contains example prompt-injection phrases (including 'ignore previous instructions') because ClawGuard detects such patterns. The pre-scan detector flagged this phrase, but in context it is used as a detection example rather than a malicious attempt to hijack the evaluator. Still, such phrases can trip other automated scanners.

Review Dimensions

Purpose & Capability
note · The skill is described as a local security auditor requiring no env vars or binaries; scan.py and SKILL.md consistently implement scanning of files in a target skill directory. One small mismatch: the 'Repository Trust Score' lists metrics such as repo star count and GitHub account age, which normally require network/GitHub API access; since the package claims 'no external calls', those metrics must either be derived from local git metadata or be aspirational. This is plausible but worth verifying in scan.py if you rely on those exact metrics.
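To illustrate the distinction raised above, here is a hedged sketch of what a purely local derivation could look like: repository age can be approximated from the first commit timestamp in local git metadata, but star count has no local equivalent at all. The `git log --reverse --format=%aI` command and the parsing below are assumptions about one plausible implementation, not code from ClawGuard's scan.py.

```python
# Sketch: a "repo age" metric derivable from local git metadata alone,
# with no GitHub API access. Star count, by contrast, exists only on
# the remote service and cannot be computed this way.
import subprocess
from datetime import datetime, timezone

def first_commit_date(git_log_output):
    """Parse `git log --reverse --format=%aI` output; the first line is
    the ISO-8601 author timestamp of the repository's first commit."""
    first_line = git_log_output.strip().splitlines()[0]
    return datetime.fromisoformat(first_line)

def repo_age_days(repo_dir):
    """Approximate repository age in days from local git history."""
    out = subprocess.run(
        ["git", "-C", repo_dir, "log", "--reverse", "--format=%aI"],
        capture_output=True, text=True, check=True,
    ).stdout
    return (datetime.now(timezone.utc) - first_commit_date(out)).days
```

Note that first-commit age measures the repository's history, not the GitHub account's age; if scan.py reports 'GitHub account age' without network access, it is worth asking what it actually measures.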
Instruction Scope
note · SKILL.md instructs local use (python3 skills/clawguard/scripts/scan.py <path>) and promises read-only, local analysis. The file contains many prompt-injection example strings (e.g., 'ignore previous instructions'); these appear as detection examples, which is expected, but the presence of raw injection phrases can trip other static detectors or confuse automated evaluators. Verify that scan.py only reads target files and does not itself execute untrusted code from scanned skills.
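The "does not itself execute untrusted code" check can also be approached statically. This is a crude, hypothetical heuristic (not ClawGuard code): it walks the AST of a source string and reports calls to dynamic-execution primitives. It will miss indirect invocation (e.g., aliased names or getattr tricks), so treat an empty result as supporting evidence, not proof.

```python
# Hypothetical static sweep for dynamic-execution primitives in a
# Python source string. RISKY_CALLS is an illustrative set, not a
# list taken from ClawGuard or ClawScan.
import ast

RISKY_CALLS = {"exec", "eval", "compile", "__import__"}

def find_exec_calls(source):
    """Return (name, line_number) pairs for calls to risky primitives."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            fn = node.func
            # Match both bare names (eval(...)) and attributes (builtins.eval(...)).
            name = fn.id if isinstance(fn, ast.Name) else getattr(fn, "attr", None)
            if name in RISKY_CALLS:
                hits.append((name, node.lineno))
    return hits

# Usage: find_exec_calls(open("skills/clawguard/scripts/scan.py").read())
```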
Install Mechanism
ok · No install spec; instruction-only with an included scan.py. No downloads, no package managers, and both the README and scan.py claim stdlib-only. That is proportionate for a local scanner.
Credentials
ok · The skill declares no environment variables, credentials, or config paths, and appears to need none. The SECURITY MANIFEST at the top of scan.py likewise states 'Environment variables accessed: none' and 'External endpoints called: none', which aligns with the declared purpose.
Persistence & Privilege
ok · Registry flags are normal (not always:true). The skill is user-invocable and does not claim to modify system-wide settings or other skills. Nothing requests elevated or persistent privileges.