Skill · v1.0.0
ClawScan security
Brain CMS · ClawHub's context-aware review of the artifact, metadata, and declared behavior.
Scanner verdict
Benign · Feb 23, 2026, 2:34 AM
- Verdict: benign
- Confidence: high
- Model: gpt-5-mini
- Summary: The skill is internally consistent with its stated purpose: it installs local Python scripts and a LanceDB vector store, and uses a local Ollama instance for embeddings and LLM consolidation. Nothing requested or installed appears out of scope or covert.
- Guidance: This skill appears to implement what it claims: a local, neuroscience-inspired memory system. Before installing, consider these practical steps:
  1. Review the included install.py and the four brain_scripts (bundled in the skill) to confirm you understand the file changes.
  2. Back up ~/.openclaw/workspace (or at least AGENTS.md and any existing memory/ directory); the installer will create and copy files and append to AGENTS.md.
  3. Decide whether you want to run Ollama locally: the code calls http://localhost:11434 for embeddings and generation, and SKILL.md recommends pulling the nomic-embed-text and llama3.2:3b models. Model downloads happen via Ollama; pip packages install from PyPI.
  4. If you do not run Ollama locally, embeddings and generation will fail, but no secrets are leaked.
  5. The installer uses shell commands (venv creation and pip install) and subprocess.run(shell=True); run it only if you trust the source or after manually inspecting the scripts.

  The only notable metadata inconsistency is that the registry 'Requirements' block showed none, while SKILL.md declares required binaries (python3, ollama); prefer the SKILL.md requirements when preparing your environment.
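The backup and Ollama-readiness steps above can be sketched as a short shell snippet. The workspace path, endpoint, and model names come from the guidance itself; the backup naming scheme is illustrative, and your layout may differ:

```shell
# Illustrative sketch only; paths and models are taken from SKILL.md guidance.
WORKSPACE="$HOME/.openclaw/workspace"
BACKUP="$WORKSPACE.bak.$(date +%Y%m%d%H%M%S)"

# Back up AGENTS.md and any existing memory/ before install.py runs
mkdir -p "$BACKUP"
[ -f "$WORKSPACE/AGENTS.md" ] && cp "$WORKSPACE/AGENTS.md" "$BACKUP/"
[ -d "$WORKSPACE/memory" ] && cp -r "$WORKSPACE/memory" "$BACKUP/"

# Check the local Ollama endpoint, then pull the recommended models
if curl -sf http://localhost:11434/api/tags > /dev/null; then
  ollama pull nomic-embed-text
  ollama pull llama3.2:3b
else
  echo "Ollama is not reachable on localhost:11434" >&2
fi
```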
Review Dimensions
- Purpose & Capability
- note: Overall coherent. The code, installer, and SKILL.md all implement a local multi-layer memory system that indexes .md schema files, runs semantic search, and performs NREM/REM consolidation using local Ollama and a LanceDB vector store. One minor inconsistency: the registry-level 'Requirements' block shown to the scanner lists no required binaries, whereas the SKILL.md metadata requires 'python3' and 'ollama'; the latter is necessary for embeddings/LLM work and is consistent with the scripts.
- Instruction Scope
- note: Instructions and scripts stay within the claimed scope: they read and write files under ~/.openclaw/workspace (memory/, memory_brain/), create INDEX.md and ANCHORS.md, index schemas, run semantic queries, compress logs, and append REM updates to schema files. They also modify AGENTS.md (appending CMS instructions) and write to workspace files; this is expected for an installer but worth noting because it changes workspace state.
- Install Mechanism
- ok: No high-risk remote downloads or obscure URLs. The installer creates a venv and pip-installs lancedb, numpy, pyarrow, and requests from PyPI, and SKILL.md suggests using 'ollama pull' to fetch models (standard Ollama behavior). All installs are local and traceable. The installer uses subprocess.run(shell=True) for convenience, which is normal for local installers but means you should inspect the script before running it.
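For reference, the venv-plus-pip pattern described above can be expressed without shell=True by passing explicit argument lists. This is a hypothetical reconstruction for review purposes, not the actual install.py; the package list comes from the review, the function names are invented, and a POSIX venv layout is assumed:

```python
import subprocess
import sys
from pathlib import Path

# Packages named in the review; the venv location is a placeholder.
PACKAGES = ["lancedb", "numpy", "pyarrow", "requests"]

def install_commands(venv_dir: Path) -> list[list[str]]:
    """Build the two install steps as explicit argv lists (no shell parsing)."""
    pip = venv_dir / "bin" / "pip"  # POSIX layout; Windows uses Scripts/pip.exe
    return [
        [sys.executable, "-m", "venv", str(venv_dir)],
        [str(pip), "install", *PACKAGES],
    ]

def run_install(venv_dir: Path) -> None:
    for cmd in install_commands(venv_dir):
        subprocess.run(cmd, check=True)  # shell=False is the default
```

Passing a list rather than a shell string avoids shell interpolation of any untrusted path components, which is the main reason subprocess.run(shell=True) warrants a closer look before running.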
- Credentials
- ok: No environment variables or external credentials are requested. Runtime network usage is limited to a local Ollama HTTP endpoint (http://localhost:11434) for embeddings and generation, plus standard package downloads (pip/ollama). The scripts do not attempt to exfiltrate data to remote endpoints.
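The local-only network behavior described here typically looks like the sketch below: a single HTTP POST to Ollama's /api/embeddings endpoint. The request shape is standard Ollama; the helper names are invented and the skill's actual code may differ:

```python
import requests

OLLAMA_URL = "http://localhost:11434"  # the only runtime network endpoint

def embedding_request(text: str, model: str = "nomic-embed-text") -> dict:
    """Build the JSON payload for Ollama's /api/embeddings endpoint."""
    return {"model": model, "prompt": text}

def embed(text: str, model: str = "nomic-embed-text") -> list[float]:
    """POST to the local Ollama instance and return the embedding vector."""
    resp = requests.post(
        f"{OLLAMA_URL}/api/embeddings",
        json=embedding_request(text, model),
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["embedding"]
```

Because the URL is hard-coded to localhost, the call fails fast when Ollama is not running, consistent with the guidance that no secrets leak in that case.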
- Persistence & Privilege
- note: The skill is not force-included (always: false) and runs only when invoked. It persists files under ~/.openclaw/workspace (creating memory/ and memory_brain/), copies scripts there, and appends to AGENTS.md. This is normal for an installer, but it does alter workspace files and should be allowed only if you accept those changes.
