Skill v2.4.0

ClawScan security

Reckit · ClawHub's context-aware review of the artifact, metadata, and declared behavior.

Scanner verdict

Suspicious · Mar 4, 2026, 10:59 PM
Verdict
suspicious
Confidence
medium
Model
gpt-5-mini
Summary
The skill appears to implement a legitimate code-verification tool, but there are notable mismatches and privacy/scope concerns: it claims 'no external tools required' while shipping many scripts that detect or use external tools, includes a dashboard that scans ~/Projects, and bundles telemetry-related scripts. Review it before running on sensitive data or granting it broad filesystem/network access.
Guidance
This skill mostly does what it says (a multi-gate verification framework) but has several red flags you should handle before installing or running it:

1) Source/provenance: the skill source/homepage are unknown. Prefer skills with a traceable repo and maintainer.
2) Audit the scripts first: review scripts/* and assets/dashboard/server.mjs for any network endpoints, telemetry, or destructive file operations before running. The package contains many shell scripts that will be executed; read them.
3) Telemetry: locate scripts/telemetry.sh and grep for network POST/PUT, curl, or fetch calls. If telemetry is present, ask what is sent and to where; disable it if you don't want data leaving your host.
4) Run in a sandbox: initially run the skill in an isolated environment (container, disposable VM, or a dedicated non-sensitive workspace), because it will scan project directories (default ~/Projects) and can write .wreckit/ and generated CI files.
5) Tooling expectations: it attempts to detect and (optionally) invoke external tools (stryker, mutmut, valgrind, etc.). Make sure you understand and control what it will install or execute; prefer installing required tools yourself in a controlled way, or rely on the AI fallbacks only after inspection.
6) Agent config & spawning: it asks to enable subagent spawning (maxSpawnDepth, children limits). Only enable these features if you understand the platform subagent model and are comfortable with autonomous subagent execution.
7) Least privilege: avoid running this skill on systems with secrets, credentials, or production data. If you must audit sensitive repos, isolate them and disable any telemetry/network calls.

If you want, I can: (A) scan the scripts for network calls and list the lines that call curl/fetch/sockets/telemetry, (B) summarize what each script will modify on disk, or (C) produce a minimal safe-run checklist (commands to run the skill in a container and what to mount).
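The audit in item 3 can be sketched as a single grep over the bundle. SKILL_DIR is a placeholder, and a tiny fixture is created below so the commands are self-contained; point SKILL_DIR at the real unpacked skill instead.

```shell
# Illustrative sketch: flag lines in a skill bundle that could move data
# off-host. The fixture stands in for the real bundle's scripts/ directory.
SKILL_DIR="$(mktemp -d)"
mkdir -p "$SKILL_DIR/scripts"
printf '#!/bin/sh\ncurl -s -X POST https://example.invalid/ingest\n' \
  > "$SKILL_DIR/scripts/telemetry.sh"

# Any hit here deserves a manual read before the skill is ever executed.
grep -RnE 'curl|wget|fetch\(|nc |/dev/tcp|XMLHttpRequest' "$SKILL_DIR/scripts"
```

A clean result does not prove the bundle is safe (network calls can be obfuscated), but any hit is an immediate reason to stop and read.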

Review Dimensions

Purpose & Capability
concern · The name/description (bulletproof code verification, agent-driven) aligns with the included scripts and gate docs (mutation testing, type checks, fuzzing, SAST). However, the description explicitly claims "no external tools required" while many scripts detect/expect external tools (Stryker, mutmut, valgrind/ASAN, go test -race, etc.) and will call network registries (check-deps). The registry metadata declares no required binaries/env, but the runtime clearly uses HOME and may call out to package managers and remote registries. This inconsistency (claimed zero external dependencies vs. many optional/required tool paths) is unexplained and increases risk.
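One way to see which of those optional tool paths would actually fire on a given host is to probe for the binaries up front. The tool names are the ones cited in this review; a missing tool presumably means the corresponding gate falls back or is skipped rather than running natively.

```shell
# Sketch: report which of the externally detected tools are installed.
report=""
for tool in stryker mutmut valgrind go; do
  if command -v "$tool" >/dev/null 2>&1; then
    report="${report}present: ${tool}
"
  else
    report="${report}missing: ${tool}
"
  fi
done
printf '%s' "$report"
```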
Instruction Scope
concern · SKILL.md and the scripts instruct the agent to read and operate on a project's filesystem (golden fixtures, .wreckit/, IMPLEMENTATION_PLAN.md, tests, CI files), which is expected for a verifier. But the included dashboard/server will auto-scan ~/Projects (or a user-supplied watch dir) and aggregate .wreckit/dashboard.json across multiple projects — a broad filesystem sweep that could read many repos. The repo also contains telemetry.sh and references to telemetry in scripts/run-all-gates.sh; SKILL.md doesn't declare any external telemetry endpoints or what data is sent. The orchestrator/swarm model expects spawning subagents and changing agent config (agents.defaults.subagents), which grants the skill broad runtime reach; instructions that spawn parallel workers and run arbitrary analysis increase the surface for accidental/exfiltrative behavior if not sandboxed.
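The breadth of that sweep is easy to preview before starting the dashboard. WATCH_DIR stands in for the dashboard's watch directory (default ~/Projects per this review); a small fixture is created below so the command is demonstrable.

```shell
# Sketch: list every .wreckit/dashboard.json a watch-dir sweep would find.
# The fixture simulates one instrumented repo and one untouched repo.
WATCH_DIR="$(mktemp -d)"
mkdir -p "$WATCH_DIR/repo-a/.wreckit" "$WATCH_DIR/repo-b"
echo '{}' > "$WATCH_DIR/repo-a/.wreckit/dashboard.json"

# Every path printed here is a repo the dashboard would read and aggregate.
found=$(find "$WATCH_DIR" -type f -path '*/.wreckit/dashboard.json')
echo "$found"
```

If the real list is broader than you expected, point the dashboard at a narrower watch directory rather than your whole projects tree.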
Install Mechanism
ok · There is no install spec (instruction-only), which avoids an automatic network download/install step. All runnable artifacts are included as scripts and assets in the skill bundle. This reduces supply-chain download risk, but means executing the skill will run local shell scripts and Node code supplied by the skill — those scripts must be audited before execution.
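That pre-execution audit starts with an inventory of everything runnable in the bundle. SKILL_DIR is a placeholder; the fixture below mirrors the layout this review describes (scripts/*.sh plus a Node dashboard server).

```shell
# Sketch: inventory every runnable artifact in a skill bundle before
# executing anything. Fixture paths stand in for the real bundle.
SKILL_DIR="$(mktemp -d)"
mkdir -p "$SKILL_DIR/scripts" "$SKILL_DIR/assets/dashboard"
touch "$SKILL_DIR/scripts/run-all-gates.sh" \
      "$SKILL_DIR/assets/dashboard/server.mjs"

# Everything listed here will run with your user's permissions.
inventory=$(find "$SKILL_DIR" -type f \( -name '*.sh' -o -name '*.mjs' \))
echo "$inventory"
```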
Credentials
concern · Registry metadata lists no required environment variables or credentials, yet SDKs/scripts implicitly use environment data (process.env.HOME in the dashboard server), and many gates/scripts will probe the host for installed tools and networks (npm, pip, cargo, valgrind, Stryker, registries). The skill also provides a telemetry script but does not declare telemetry endpoints or ask explicit permission. Requiring modification of agent config (agents.defaults.subagents) to enable spawning is another effective capability change not represented in the declared environment/permissions.
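If you do run it, one least-surprise measure is to start it under an explicit environment allowlist, so implicit probes like process.env.HOME only see what you granted. This uses standard POSIX `env -i`; the values are illustrative throwaways.

```shell
# Sketch: run a command with the inherited environment cleared (env -i)
# and only an allowlisted HOME and PATH supplied. SANDBOX_HOME is a
# throwaway directory standing in for a dedicated workspace.
SANDBOX_HOME="$(mktemp -d)"
out=$(env -i HOME="$SANDBOX_HOME" PATH=/usr/bin:/bin \
  sh -c 'echo "HOME=$HOME"')
echo "$out"
```

The same allowlist pattern works for launching the dashboard server or gate scripts; anything the bundle reads from the environment then has to be something you deliberately passed.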
Persistence & Privilege
note · The skill is not force-included (always:false) and does not declare elevated privileges. It does, however, expect the orchestrator/subagent capability (sessions_spawn and maxSpawnDepth >= 2) and instructs the user to set agent config. The skill includes scripts that can write files into a repo (e.g., generated CI workflow, .wreckit proof bundles). These behaviors are normal for a build/audit tool but mean the skill will create files in scanned repos if run — run it in a controlled/sandboxed workspace if you don't want repo mutation.
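A simple way to see exactly what such a tool wrote into a repo is to commit a baseline before the run and ask git what appeared afterwards. The .wreckit/ path is the one this review mentions; the fixture repo and the simulated write are illustrative.

```shell
# Sketch: detect files a verifier created by diffing git's view of the
# repo against a pre-run baseline commit (fixture repo, simulated run).
REPO="$(mktemp -d)"
git -C "$REPO" init -q
git -C "$REPO" config user.email audit@example.invalid
git -C "$REPO" config user.name audit
touch "$REPO/src.txt"
git -C "$REPO" add -A && git -C "$REPO" commit -qm baseline

# Simulate what a run might leave behind (.wreckit/ per this review).
mkdir -p "$REPO/.wreckit" && echo proof > "$REPO/.wreckit/bundle.json"

# Anything listed here was created or changed by the run.
created=$(git -C "$REPO" status --porcelain)
echo "$created"
```

Running the skill against a throwaway clone and inspecting this diff before touching the real repo keeps any unwanted mutation reviewable and reversible.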