Skill v1.2.4
ClawScan security
Vector Mind Map Fusion · ClawHub's context-aware review of the artifact, metadata, and declared behavior.
Scanner verdict
Suspicious · Apr 27, 2026, 2:06 AM
- Verdict: suspicious
- Confidence: medium
- Model: gpt-5-mini
- Summary: The skill's functionality (scanning OpenClaw sessions, vectorizing and persisting memories) matches its description, but several implementation and deployment choices are inconsistent or raise security/privacy concerns and merit human review before installation.
- Guidance: This skill appears to implement the memory-extraction/recall system it claims, but review it and take precautions before installing:
  - Data scope: it scans your OpenClaw session files and writes persistent memory files (brain.db and InfinityDB files). If your sessions contain secrets (passwords, API keys, tokens, private notes), those may be extracted and stored. Consider running in a sandbox, or set SESSIONS_DIR to a safe test directory.
  - File locations: default storage includes ~/.local/share/neural-memory and an absolute '/workspace/fusion/...' path in code. Confirm and override these paths in the config before running so the skill doesn't write to unexpected locations.
  - Pickle risk: the HNSW index is saved and loaded with pickle, and loading pickled files from untrusted sources can execute arbitrary code. Only run this skill on data you control, and avoid reusing existing brain.hnsw files of unknown origin.
  - External install commands: SKILL.md asks you to run 'curl https://ollama.com/install.sh | sh' and 'pip install --break-system-packages'. Both have system-level effects; inspect the installer script first, and skip the pip flag if you don't want pip to alter system packages.
  - Secrets handling: the l1 classifier detects and classifies 'password/secret/token/api_key' text, but there is no obvious automatic redaction before storing. To avoid persisting secrets, add filtering or prevent the skill from scanning live session directories.
  - Actionable steps: (1) review the code (especially src/infinitydb_lite.py, l1_classifier, and the paths in config); (2) run the skill in an isolated environment or container; (3) set SESSIONS_DIR and NEURALMEMORY_DIR to controlled test directories; (4) remove or audit any existing brain.hnsw before loading; (5) do not blindly execute the suggested curl | sh installer or use the pip flag without inspection.
If you want, I can point out exact lines/files that implement the session scanning, brain DB write paths, and pickle load/save calls so you can inspect them more easily.
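Actionable step (3) above can be sketched as a small harness. This is a minimal, hypothetical setup, not part of the skill: it assumes only the SESSIONS_DIR and NEURALMEMORY_DIR environment variable names noted in this review, and points both at throwaway temp directories before the skill is invoked.

```python
import os
import tempfile
from pathlib import Path

# Create a disposable sandbox so the skill reads from and writes to
# throwaway directories instead of real OpenClaw session data.
sandbox = Path(tempfile.mkdtemp(prefix="neural-memory-sandbox-"))
sessions_dir = sandbox / "sessions"
memory_dir = sandbox / "memory"
sessions_dir.mkdir()
memory_dir.mkdir()

# Env var names taken from this review; the skill reads them at startup.
os.environ["SESSIONS_DIR"] = str(sessions_dir)
os.environ["NEURALMEMORY_DIR"] = str(memory_dir)

print(os.environ["SESSIONS_DIR"])
```

Launch the skill from the same process (or export the same variables in the shell that runs it) so the overrides take effect; deleting the sandbox afterwards discards everything it persisted.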
Review Dimensions
- Purpose & Capability: note — The name/description describe a 3-layer memory extraction/retrieval system, and the code implements it: a session scanner (L1), a consolidator (L2), and archives/index (L3). Access to OpenClaw sessions and a local neural memory DB is expected for this purpose, but the project writes to paths outside the repository (the default brain DB in ~/.local/share/neural-memory and an absolute '/workspace/fusion/...' path in the InfinityDB implementation), which is surprising and should be confirmed.
- Instruction Scope: concern — SKILL.md instructs the skill to scan user session JSONL files and persist the extracted contents into a local DB and files, which will capture whatever is in OpenClaw sessions (including credentials or secrets users may have typed). It also tells users to run external install commands (curl | sh to install Ollama) and to pip install with '--break-system-packages'. The instructions give broad discretion to scan and ingest all sessions and to run on a schedule (cron-like), which increases privacy risk.
- Install Mechanism: note — There is no formal install spec, but the README/SKILL.md asks the user to curl an external installer (ollama.com/install.sh) and to 'ollama pull' large models; it also recommends 'pip install --break-system-packages httpx'. These external install steps are normal for a skill that requires Ollama, but they carry the usual network/script risks (curl | sh) and system-level implications (the pip flag). The code itself is bundled, so the skill downloads nothing at runtime beyond talking to a local Ollama server.
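The curl | sh concern above has a simple mitigation: save the installer to disk and read it before running it. The helpers below are a hypothetical sketch (the function names and the 30-line preview length are this review's inventions); only the URL comes from SKILL.md.

```python
from pathlib import Path
import urllib.request

INSTALLER_URL = "https://ollama.com/install.sh"  # URL quoted in SKILL.md

def download_for_review(url: str, dest: Path) -> None:
    # Save the installer to a file instead of piping it straight into sh,
    # so nothing executes until it has been read.
    with urllib.request.urlopen(url) as resp:
        dest.write_bytes(resp.read())

def preview(path: Path, lines: int = 30) -> str:
    # Return the first lines of the saved script for a quick manual audit.
    return "\n".join(path.read_text().splitlines()[:lines])

# Usage (requires network access):
#   download_for_review(INSTALLER_URL, Path("ollama-install.sh"))
#   print(preview(Path("ollama-install.sh")))
```

Only after reading the script would you run it with `sh ollama-install.sh`, which is functionally identical to the piped form but auditable.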
- Credentials: concern — The skill declares no required env vars, but the code reads the SESSIONS_DIR and NEURALMEMORY_DIR environment variables and defaults to user-local paths (~/.openclaw/... and ~/.local/share/neural-memory). That is logical for a memory skill, but the number and sensitivity of the files it reads and writes is high (user session data, brain.db). The classifier also explicitly recognizes 'password/secret/token/api_key' patterns and will persist classified items into its stores unless you change that behavior, so credential capture and persistence is a real risk if sessions contain secrets.
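The missing-redaction gap noted above could be closed with a pre-storage filter. The sketch below is this review's assumption, not the skill's code: it assumes secrets appear in `key=value` or `key: value` form and masks only the four keywords the l1 classifier is said to recognize; the classifier's actual patterns may differ.

```python
import re

# Mask the value after a secret-looking key ('password', 'secret', 'token',
# 'api_key' — the keywords this review says the l1 classifier recognizes)
# before the text is handed to any persistence layer.
SECRET_RE = re.compile(r"(?i)\b(password|secret|token|api_key)\b(\s*[:=]\s*)\S+")

def redact(text: str) -> str:
    # \1 keeps the key, \2 keeps the separator, the value is replaced.
    return SECRET_RE.sub(r"\1\2[REDACTED]", text)

print(redact("note api_key=sk-abc123 ok"))  # → note api_key=[REDACTED] ok
```

Calling `redact()` on each extracted memory before it reaches brain.db would prevent the most obvious secret formats from being persisted, though free-form secrets (a password typed on its own line) would still slip through.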
- Persistence & Privilege: concern — The skill persists data outside the project (brain.db in the user's home directory and InfinityDB files), and some defaults point to absolute paths (e.g., '/workspace/fusion/...') that differ from the config constants. It uses pickle to persist and load HNSW indices (brain.hnsw), so loading a tampered pickle file could execute code. 'always' is false and the skill is not forced into every run, but its filesystem writes and external install suggestions give it a persistent on-disk presence and lasting access to stored memories.
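One way to harden the pickle load flagged above is an integrity check: record a SHA-256 of the index at save time and refuse to unpickle if the digest no longer matches. This checksum-sidecar scheme is this review's suggestion, not something the skill implements; only the brain.hnsw filename comes from the scan. Note it detects tampering after a trusted save, but does not make unpickling a file of unknown origin safe — for that, avoiding pickle entirely is the real fix.

```python
import hashlib
import pickle
from pathlib import Path

def save_index(index: object, path: Path) -> None:
    # Pickle the index and record its SHA-256 in a sidecar file,
    # so later loads can detect tampering or substitution.
    data = pickle.dumps(index)
    path.write_bytes(data)
    sidecar = path.with_suffix(path.suffix + ".sha256")
    sidecar.write_text(hashlib.sha256(data).hexdigest())

def load_index(path: Path) -> object:
    data = path.read_bytes()
    expected = path.with_suffix(path.suffix + ".sha256").read_text().strip()
    if hashlib.sha256(data).hexdigest() != expected:
        raise ValueError(f"checksum mismatch for {path}; refusing to unpickle")
    # Only unpickle after the digest matches the one recorded at save time.
    return pickle.loads(data)
```

Wrapping the skill's existing save/load calls this way would turn a silently executed malicious brain.hnsw into a loud ValueError, provided the sidecar itself is stored somewhere an attacker cannot also rewrite.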
