Local Memory Search
Audited by ClawScan on May 10, 2026.
Overview
The skill is described as a simple local memory-search tool, but its code does not match that description: it calls an undeclared Ollama embedding service and stores broad memory and knowledge contents in a persistent local index.
Install only if you are comfortable with your OpenClaw MEMORY.md, memory/*.md, and knowledge/**/*.md contents being indexed into ~/.openclaw/memory_index.json and processed by a local Ollama service. Verify that Ollama and the embedding model are trusted and available, and consider deleting the index when it is no longer needed.
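For users who want to follow the cleanup advice, a minimal sketch of inspecting and removing the index. It assumes only the index path quoted in the findings; the index schema and any other state the skill keeps are unverified:

```python
import os

# Path where the skill persists its index (from the skill's INDEX_PATH).
INDEX_PATH = os.path.expanduser("~/.openclaw/memory_index.json")

if os.path.exists(INDEX_PATH):
    # Report size on disk, then remove the index when it is no longer needed.
    print(f"Index exists ({os.path.getsize(INDEX_PATH)} bytes); removing it.")
    os.remove(INDEX_PATH)
else:
    print("No index found.")
```

Note that removing the file only discards the persisted copies; anything already sent to the Ollama service is outside this cleanup.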
Findings (3)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Finding 1: Undisclosed embedding backend
A user may believe the skill is a simple, dependency-free TF-IDF search, while it actually sends memory text to a local Ollama service for embeddings.
The user-facing description claims TF-IDF matching with no dependencies or APIs, but search.py obtains Ollama embeddings via the CLI or the localhost API. This mismatch could lead users to trust a different data flow and dependency profile than the skill actually uses.
Evidence (SKILL.md): "No external API needed. ... - TF-IDF based semantic matching ... - Zero external dependencies"
Remediation: Update the documentation and metadata to clearly state the Ollama dependency, the model requirement, the local API fallback, and the actual indexing method.
Finding 2: Undeclared local binary/API dependency
The skill may fail unexpectedly or process memory content through an unreviewed local service the user did not realize was involved.
The code invokes an undeclared local binary and falls back to a local API service. The registry requirements declare no required binaries, and SKILL.md claims no external dependencies, so users are not told which local service and model they must trust.
Evidence (search.py): subprocess.run(['ollama', 'embed', EMBEDDING_MODEL, text], ...); ... "http://localhost:11434/api/embed"
Remediation: Declare Ollama and the nomic-embed-text model as requirements, and explain the CLI-to-API fallback behavior.
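Based on the quoted snippet, the fallback chain likely resembles the sketch below. The error handling, CLI output format, response schema, and timeouts are assumptions, not the skill's verified code:

```python
import json
import subprocess
import urllib.request

EMBEDDING_MODEL = "nomic-embed-text"  # model named in the finding

def embed(text: str) -> list[float]:
    """Sketch of the CLI-then-API fallback described in the finding."""
    # First attempt: the `ollama` CLI binary (the undeclared dependency).
    try:
        out = subprocess.run(
            ["ollama", "embed", EMBEDDING_MODEL, text],
            capture_output=True, text=True, check=True, timeout=30,
        )
        # Output shape is assumed; the skill's parsing is not shown in the quote.
        return json.loads(out.stdout)["embeddings"][0]
    except (OSError, subprocess.SubprocessError, KeyError, ValueError):
        pass
    # Fallback: the local Ollama HTTP API on its default port.
    req = urllib.request.Request(
        "http://localhost:11434/api/embed",
        data=json.dumps({"model": EMBEDDING_MODEL, "input": text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)["embeddings"][0]
```

Either path hands the raw memory text to the Ollama process, which is why the service and model must be trusted before install.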
Finding 3: Persistent plaintext index of private content
Private memory or knowledge-base content may be duplicated into a persistent local index and later surfaced in search results.
The build step indexes memory and knowledge markdown files and stores raw text chunks plus embeddings in a persistent JSON file. The documentation does not describe the knowledge path, the plaintext storage, retention, exclusions, or cleanup.
Evidence (search.py): MEMORY_PATHS = [f"{WORKSPACE}/MEMORY.md", f"{WORKSPACE}/memory/*.md", f"{WORKSPACE}/knowledge/**/*.md"]; INDEX_PATH = os.path.expanduser("~/.openclaw/memory_index.json"); ... 'text': chunk['text'] ... json.dump(index, f)
Remediation: Document exactly which files are indexed, whether raw text is stored, how to exclude paths, and how to delete or rebuild the index safely.
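To audit what would be indexed and what the index already holds, a hypothetical inspection helper. The WORKSPACE location and the 'chunks'/'text' schema are assumptions inferred from the quoted code, not the skill's documented layout:

```python
import glob
import json
import os

WORKSPACE = os.path.expanduser("~/.openclaw/workspace")  # assumed workspace root
INDEX_PATH = os.path.expanduser("~/.openclaw/memory_index.json")

# Glob patterns the build step uses, per the quoted MEMORY_PATHS.
MEMORY_PATHS = [
    f"{WORKSPACE}/MEMORY.md",
    f"{WORKSPACE}/memory/*.md",
    f"{WORKSPACE}/knowledge/**/*.md",
]

def indexed_files() -> list[str]:
    """List every file the build step would pull into the index."""
    files: list[str] = []
    for pattern in MEMORY_PATHS:
        files.extend(glob.glob(pattern, recursive=True))
    return files

def stored_plaintext(index_path: str = INDEX_PATH) -> list[str]:
    """Return the raw text chunks persisted alongside the embeddings."""
    if not os.path.exists(index_path):
        return []
    with open(index_path) as f:
        index = json.load(f)
    # Schema assumed from the quoted code: chunks carry a 'text' field.
    return [chunk["text"] for chunk in index.get("chunks", [])]
```

Running `indexed_files()` before install shows exactly which private files are in scope; `stored_plaintext()` confirms whether raw text (not just vectors) sits in the JSON index.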
