Skill v2.0.1

ClawScan security

Local Self-Healing Machine Learning · ClawHub's context-aware review of the artifact, metadata, and declared behavior.

Scanner verdict

Suspicious · Mar 12, 2026, 6:52 AM
Verdict
suspicious
Confidence
medium
Model
gpt-5-mini
Summary
The skill claims to be fully local, telemetry-free, and non‑fingerprinting, but its codebase includes components (deviceId/envFingerprint, optional external installer commands, self-modification hooks, and shell/child_process usage) that contradict those claims — review carefully and run in a sandbox before trusting it with real data or allowing it to modify code.
Guidance
What to consider before installing or running this skill:
- The SKILL.md promises 'no fingerprinting' and 'no telemetry', but the code includes files named deviceId.js and envFingerprint.js and other modules that likely collect environment or machine features. Ask the author for the contents of those files, or inspect them yourself, before trusting the claim.
- The skill can modify files and has a self-modify toggle (EVOLVE_ALLOW_SELF_MODIFY). Keep that flag set to false unless you have audited the code and are comfortable allowing autonomous edits. Prefer running in single-run (--run) or review mode before enabling the continuous loop.
- The dashboard and tools read and serve local data (memory/, assets/), and the dashboard API returns some environment settings. Run the dashboard only on localhost and avoid exposing it to untrusted networks. Back up any important repository data before running solidify or evolution cycles.
- The SKILL.md recommends installing Ollama via a curl | sh command. That installer and any pulled models are external network actions and should be treated as separate trust decisions — do not blindly run remote install scripts.
- The code uses child_process/execSync/spawn, git log, and other shell interactions. Scripts can execute shell commands (and some reference Feishu notification commands), so inspect scripts that call external commands before running them.
- Suggested actions:
  1. Review src/gep/deviceId.js, src/gep/envFingerprint.js, and src/gep/hubSearch.js to confirm they do not exfiltrate identifiers or call remote endpoints.
  2. Run the skill in a tight sandbox or VM, disconnected from sensitive networks and with least privilege.
  3. Keep EVOLVE_ALLOW_SELF_MODIFY=false and review any proposed changes before applying them (use --dry-run solidify).
  4. If you need higher assurance, request a reproducible build or an explanation from the author of how 'no telemetry' is implemented and audited.
If you want, I can: (a) summarize suspicious files and where they are used, (b) search the repository for network-sending code or hard-coded endpoints, or (c) walk through specific files (deviceId.js, envFingerprint.js, skillDistiller) and explain exactly what they do.

Review Dimensions

Purpose & Capability
note · The stated purpose (local self-healing ML for the agent) matches many files (evolve, gep, ml, feedback, knowledge base). However, there are surprising files and capabilities that don't cleanly match the 'no fingerprinting / no telemetry' promise: src/gep/deviceId.js and src/gep/envFingerprint.js exist (suggesting machine/environment identification), src/gep/hubSearch.js and skillDistiller imply external discovery/distillation flows, and scripts reference integrations (Feishu). These items are not justified by the SKILL.md claims of 'no machine ID' and 'no fingerprinting'.
Instruction Scope
concern · SKILL.md asserts 'zero network calls', but it documents optional Ollama integration and shows an example that runs a curl installer (curl https://ollama.com/install.sh | sh) and ollama pull — both are network operations initiated by user instructions. index.js and the scripts use child_process/execSync/spawn, read .env and many local files (memory/, assets/), and the dashboard server exposes local data (including some env settings) via an API with CORS '*'. The code also includes an EVOLVE_ALLOW_SELF_MODIFY flag and mechanisms to solidify/evolve code, meaning the runtime can modify repository files when enabled — this deviates from the 'cannot modify itself or core configs' claim in SKILL.md (the doc claims protection, but the option exists).
Install Mechanism
note · The skill has no install spec in the registry (instruction-only), but the package contains full source and scripts to run. The SKILL.md recommends installing Ollama via a one-liner that pipes curl to sh (a high-risk practice) if the user wants embedding support. That optional installer is an external network action and a risky pattern even though it is not mandatory for the skill to run.
Credentials
concern · Registry metadata declares no required env vars or credentials, but SKILL.md and the code read many environment variables (EVOLVE_ALLOW_SELF_MODIFY, EVOLVE_STRATEGY, OLLAMA_URL/OLLAMA_EMBED_MODEL, LSHML_DASHBOARD_PORT, and several EVOLVER_*/EVOLVE_* runtime flags). The dashboard's gatherData exposes some environment values, and the skill reads a .env file at startup (dotenv). More importantly, the presence of envFingerprint.js and deviceId.js indicates the code may compute or store identifiers from the environment despite the 'no fingerprinting' claim — that is disproportionate to the claimed privacy guarantees.
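Because the registry declares none of these variables, pinning them explicitly before a run keeps behavior conservative and auditable. A hypothetical .env fragment assuming the variable names listed above — the values are illustrative, not documented defaults, except for Ollama's standard loopback address:

```shell
# Conservative .env for auditing runs of this skill (illustrative values).
EVOLVE_ALLOW_SELF_MODIFY=false     # never let the engine edit its own source
LSHML_DASHBOARD_PORT=3000          # keep the dashboard local; do not port-forward it
OLLAMA_URL=http://127.0.0.1:11434  # loopback only, so embedding traffic stays on-machine
```

Diff the dashboard's gatherData output against this file to see exactly which values it would expose.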
Persistence & Privilege
concern · The skill can run persistently as a daemon (--loop), writes a pid file, maintains persistent knowledge under memory/ and assets/gep/, and implements 'solidify', which writes genes, capsules, and events. There is an explicit EVOLVE_ALLOW_SELF_MODIFY toggle (default false) that suggests the engine can change its own source when enabled. While self-modification and persistent state are plausible for a self-healing system, they are high-privilege actions, and the skill's claim that it 'cannot modify itself or core configs' is inconsistent with the existence of these mechanisms.