Skill v1.0.0
ClawScan security
Local Memory Search · ClawHub's context-aware review of the artifact, metadata, and declared behavior.
Scanner verdict
Suspicious · Feb 26, 2026, 8:01 AM
- Verdict
- suspicious
- Confidence
- medium
- Model
- gpt-5-mini
- Summary
- The skill largely matches a local-memory search tool, but there are clear mismatches between its README and the actual code (the README claims TF‑IDF and 'no external dependencies' while the code calls an embedding binary or a localhost API), so review it before running on sensitive memory files.
- Guidance
- This skill will read all OpenClaw memory files under ~/.openclaw and write an index at ~/.openclaw/memory_index.json — expected behavior for a local search tool. However: (1) the README claims TF‑IDF and 'no external dependencies', but the script uses embeddings and calls an external embedding provider (it runs the 'ollama' CLI or posts to http://localhost:11434). (2) Before running, verify whether you have or will install 'ollama' and how that service is configured (it might forward data elsewhere). If you want a pure TF‑IDF implementation that never calls external binaries or services, review or modify the code to replace get_embedding with a local TF‑IDF calculator. If you proceed, avoid running it on memory files containing highly sensitive data until you confirm the embedding provider is local and trustworthy.
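The TF‑IDF replacement suggested above can be sketched with the standard library alone. This is a hedged illustration, not the skill's actual code: the function names and the sparse-dict vector format are choices made here, and how they would slot in for the skill's get_embedding depends on search.py's internals, which the scan only summarizes.

```python
import math
import re
from collections import Counter

def tfidf_vectors(docs):
    """Build TF-IDF vectors (sparse dicts) for a list of document strings.

    A pure-stdlib stand-in for embedding-based similarity: no external
    binaries or HTTP calls are involved, so memory text never leaves
    the process.
    """
    tokenized = [re.findall(r"[a-z0-9]+", d.lower()) for d in docs]
    # Document frequency: in how many documents each term appears.
    df = Counter()
    for toks in tokenized:
        df.update(set(toks))
    n = len(docs)
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        # Smoothed IDF keeps weights finite for terms seen in every doc.
        vec = {t: (c / len(toks)) * math.log((1 + n) / (1 + df[t]))
               for t, c in tf.items()}
        vectors.append(vec)
    return vectors

def cosine(a, b):
    """Cosine similarity between two sparse dict vectors."""
    dot = sum(v * b.get(k, 0.0) for k, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0
```

To rank snippets, vectorize the query alongside the indexed documents and sort by cosine score; the same cosine step the skill already performs then works unchanged on these local vectors.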
Review Dimensions
- Purpose & Capability
- concern: The stated purpose (local semantic search over OpenClaw memory files) matches what the script does (reads ~/.openclaw/workspace and returns ranked snippets). However, SKILL.md claims TF‑IDF based matching and 'zero external dependencies', while search.py actually requests embeddings and computes cosine similarity. That is an inconsistency: the code requires an embedding provider (the ollama CLI or a local HTTP API) that is not documented in SKILL.md.
- Instruction Scope
- concern: The instructions tell you to run search.py and build an index; the code will: read all files under ~/.openclaw/workspace (MEMORY.md, memory/*.md, knowledge/**/*.md), create/write to ~/.openclaw/memory_index.json, call an external 'ollama' CLI via subprocess or fall back to an HTTP POST to http://localhost:11434/api/embed to obtain embeddings, then compute cosine similarities. That means your local memory content will be passed to an external binary or to a local HTTP service. SKILL.md's omission of this behavior is problematic.
- Install Mechanism
- note: There is no install specification (lowest risk), but the script relies on an external binary/service ('ollama') or a local embedding HTTP endpoint. SKILL.md states 'No external API needed' and 'Zero external dependencies', which is inaccurate given the code's runtime dependency on an embedding provider.
- Credentials
- note: The skill requests no environment variables or credentials. It only reads/writes files under the user's home (~/.openclaw). That file access is proportional to a local memory search tool, but note that it will transmit memory text to the embedding provider (ollama or the localhost API), which could expose sensitive content depending on how that provider is configured.
- Persistence & Privilege
- ok: `always` is false and the skill does not request system-level privileges. It writes an index file at ~/.openclaw/memory_index.json and does not modify other skills or global agent config. No persistent agent-wide privilege escalation is requested.
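The guidance above asks you to verify the embedding provider before any memory text is sent to it. A minimal pre-flight check can do that without performing any embedding calls; this sketch assumes the default Ollama address the scanner reported (http://localhost:11434) and only probes for presence, so it is safe to run even when nothing is installed.

```python
import shutil
import urllib.error
import urllib.request

def check_embedding_provider(url="http://localhost:11434"):
    """Report which of the flagged embedding providers are present.

    Checks for the 'ollama' CLI on PATH and for a service answering at
    the given localhost address. Sends no memory content anywhere.
    """
    report = {
        "ollama_cli": shutil.which("ollama") is not None,
        "local_api": False,
    }
    try:
        # A running Ollama server answers plain GET requests at its root.
        with urllib.request.urlopen(url, timeout=2) as resp:
            report["local_api"] = resp.status == 200
    except (urllib.error.URLError, OSError):
        # Nothing listening: the script's embedding path would fail anyway.
        pass
    return report
```

If both checks come back False, search.py's get_embedding path cannot work as described; if either is True, inspect that provider's configuration (model source, any remote forwarding) before running the skill on sensitive memory files.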
