Skill v0.1.3
ClawScan security
Memory Core · ClawHub's context-aware review of the artifact, metadata, and declared behavior.
Scanner verdict
Benign · Mar 5, 2026, 12:36 PM
- Verdict: benign
- Confidence: high
- Model: gpt-5-mini
- Summary: The skill's code, configuration, and runtime instructions are coherent with its stated purpose (a LanceDB-based long-term memory for OpenClaw); the main privacy/security consideration is that memory text may be sent to cloud embedding providers unless you choose a local provider.
- Guidance: This skill appears to do what it says: store and retrieve memories locally with optional cloud embeddings. Before installing, decide whether you are comfortable having memory text sent to a cloud embedding provider (default: siliconflow). If you prefer local-only operation, set embedding_provider to 'ollama' (pointing at a local Ollama instance) or 'local_mock'. Inspect ~/.openclaw/openclaw.json, because the skill will try to read it to auto-load an embedding API key and to infer agent models; if that file contains secrets you don't want accessed, remove or separate them. Confirm you are willing to install the declared Python packages and that storing the LanceDB file under the skill directory (or a configured path) is acceptable. If you need stronger guarantees, run the skill in an isolated environment or restrict outbound network access so embeddings cannot be sent to the cloud.
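
For readers who want the local-only setup described in the guidance, the skill's config.json might look like the following sketch. Only the embedding_provider field and its 'ollama'/'local_mock' values are named in this scan; the ollama_url and embedding_model keys (and their values) are illustrative assumptions about how a local Ollama endpoint would be configured:

```json
{
  "embedding_provider": "ollama",
  "ollama_url": "http://localhost:11434",
  "embedding_model": "nomic-embed-text"
}
```

With a local provider configured this way, no embedding_api_key should be needed and memory text stays on the machine.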
Review Dimensions
- Purpose & Capability
- ok · Name/description match the implementation: the code provides ingest/retrieve/forget backed by LanceDB, intent/scene classification, and embedding calls. No unrelated binaries, unexplained credentials, or excessive dependencies are requested.
- Instruction Scope
- note · Runtime instructions are limited to running the provided Python CLI under the skill directory. The code reads ~/.openclaw/openclaw.json (to auto-load a siliconflow API key and to infer the agent model) and persists a LanceDB file under the skill directory (default data/memory.lance). Importantly, text passed to ingest/retrieve is sent via HTTP to the configured embedding provider (default: the siliconflow cloud); this is expected behavior, but it is a privacy-relevant network transmission.
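
To make the privacy point concrete, here is a minimal sketch of what an OpenAI-style embeddings request to a cloud provider looks like; the endpoint URL, model name, and payload shape are assumptions for illustration, not taken from the skill's code. The key observation is that the raw memory text appears verbatim in the outbound request body:

```python
import json

def build_embedding_request(text: str, model: str, api_key: str) -> dict:
    """Assemble a hypothetical OpenAI-style embeddings request.

    Nothing here is taken from the skill's actual code; the endpoint
    and payload shape are assumptions for illustration only.
    """
    return {
        # Assumed endpoint; the skill's real provider URL may differ.
        "url": "https://api.siliconflow.cn/v1/embeddings",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        # The memory text is placed verbatim in the JSON body, which is
        # exactly the privacy-relevant transmission this note describes.
        "body": json.dumps({"model": model, "input": text}),
    }
```

Anything passed to ingest or retrieve would travel in a body like this, so restricting outbound network access (or using a local provider) is what keeps memory text private.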
- Install Mechanism
- ok · No install script is included (instruction-only install), and the declared Python dependencies (lancedb, numpy, requests) match the code. Nothing is downloaded from arbitrary URLs or written outside the skill's directory, aside from reading ~/.openclaw/openclaw.json.
- Credentials
- note · The skill declares no required env vars but supports MEMORY_CORE_* env overrides and a secret embedding_api_key in config.json/skill.json. It also attempts to read ~/.openclaw/openclaw.json to find a siliconflow API key and agent model; reading that file is explainable as auto-configuration, but users should be aware that the skill reads a user config file which may contain secrets for other providers.
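
The layered lookup this note describes (env overrides, then the skill's own config, then the user's OpenClaw config) can be sketched as follows; the exact env var name, the JSON key names, and the precedence order are assumptions extrapolated from the MEMORY_CORE_* pattern, not verified against the skill's code:

```python
import json
import os
from pathlib import Path
from typing import Optional

def resolve_embedding_api_key(skill_dir: Path) -> Optional[str]:
    """Hypothetical lookup order for the embedding API key."""
    # 1. A MEMORY_CORE_* env override wins if set (assumed var name).
    key = os.environ.get("MEMORY_CORE_EMBEDDING_API_KEY")
    if key:
        return key
    # 2. Fall back to the skill's own config.json.
    config = skill_dir / "config.json"
    if config.exists():
        key = json.loads(config.read_text()).get("embedding_api_key")
        if key:
            return key
    # 3. Finally, try the user's OpenClaw config (hypothetical key name).
    user_config = Path.home() / ".openclaw" / "openclaw.json"
    if user_config.exists():
        return json.loads(user_config.read_text()).get("siliconflow_api_key")
    return None
```

Under this reading, step 3 is why the scan flags ~/.openclaw/openclaw.json: a convenience fallback that happens to touch a file holding other providers' secrets.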
- Persistence & Privilege
- ok · always=false, and the skill does not try to persist beyond its own data directory. It creates a local LanceDB file under the skill root (default data/memory.lance) and does not modify other skills or system-wide settings.
