Openclaw Memories
Verdict: Warn. Audited by ClawScan on May 10, 2026.
Overview
The skill mostly matches its memory/search purpose, but its LLM observer and search implementation can expose chats, API keys, or unrelated memories in ways users may not expect.
Review before installing. If you use it, pass the correct provider key explicitly instead of relying on environment fallback, avoid running Observer on sensitive conversations unless you accept remote LLM processing, and consider fixing or disabling the Indexer search until query filtering is verified.
Findings (4)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
An AI-provider API key could be exposed to the wrong third-party provider or appear in request logs outside the account boundary the user intended.
The same fallback key is used for all provider branches. If the selected provider does not match the available environment key, an OpenAI key can be sent to Anthropic or Gemini, or an Anthropic key can be sent to OpenAI or Gemini.
const key = this.config.apiKey || process.env.OPENAI_API_KEY || process.env.ANTHROPIC_API_KEY || '';
... Authorization: `Bearer ${key}` ... 'x-api-key': key ... generateContent?key=${key}

Use an explicit apiKey for the chosen provider, or patch the code to read only provider-specific variables such as OPENAI_API_KEY for OpenAI, ANTHROPIC_API_KEY for Anthropic, and GEMINI_API_KEY for Gemini. The skill metadata should also declare these credentials.
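One way to apply that patch, sketched below. The `resolveKey` helper and config shape are illustrative, not the skill's actual API; only the three environment variable names come from the review.

```typescript
// Map each provider to its own environment variable so a key can never
// be sent to the wrong API. resolveKey and Provider are hypothetical names.
type Provider = 'openai' | 'anthropic' | 'gemini';

const ENV_VARS: Record<Provider, string> = {
  openai: 'OPENAI_API_KEY',
  anthropic: 'ANTHROPIC_API_KEY',
  gemini: 'GEMINI_API_KEY',
};

function resolveKey(provider: Provider, explicitKey?: string): string {
  // Prefer an explicitly configured key; otherwise read only the
  // provider-specific variable. Never fall through to another provider's key.
  const key = explicitKey ?? process.env[ENV_VARS[provider]];
  if (!key) {
    throw new Error(`Missing ${ENV_VARS[provider]} for provider "${provider}"`);
  }
  return key;
}
```

With this shape, selecting Anthropic while only OPENAI_API_KEY is set fails fast instead of silently sending an OpenAI key to Anthropic.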
Conversation text may be transmitted to a remote LLM provider even when the user has not successfully configured a provider credential.
The Observer builds a prompt from conversation messages and posts it to a remote LLM endpoint even when the resolved key is an empty string; the docs say an API key is required, but the code does not stop before sending.
const key = this.config.apiKey || process.env.OPENAI_API_KEY || process.env.ANTHROPIC_API_KEY || '';
... body: JSON.stringify({ messages: [{ role: 'user', content: prompt }], max_tokens: 2000 })

Do not run Observer on sensitive conversations until the code checks for a valid provider-specific key before making any network request. Prefer explicit user approval before sending conversation history to remote LLM APIs.
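A minimal pre-send guard could look like this. The `ObserverConfig` shape, `allowRemoteProcessing` flag, and function name are assumptions for illustration; the point is that both checks happen before any request body is built.

```typescript
// Refuse to transmit conversation text unless a plausible key is present
// and the user has opted in. All names here are illustrative.
interface ObserverConfig {
  apiKey?: string;
  allowRemoteProcessing: boolean;
}

function canSendToRemoteLLM(config: ObserverConfig): boolean {
  const key = config.apiKey ?? '';
  // An empty or whitespace-only key means no provider was ever configured;
  // bail out before any network request is constructed.
  if (key.trim().length === 0) return false;
  // Require explicit consent before conversation history leaves the machine.
  return config.allowRemoteProcessing;
}
```

This closes the gap the finding describes: an empty-string key can no longer reach the HTTP layer.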
A search for one topic could surface unrelated memory entries, potentially bringing private or irrelevant stored context into an agent response.
The Indexer calls search with `WHERE content MATCH ?`, but this mock DB wrapper only implements equality filters. That means the query text may be ignored and the first indexed memory chunks can be returned regardless of whether they match.
// WHERE col = ?
const w = sql.match(/WHERE\s+(\w+)\s*=\s*\?/i);
if (w && params.length) rows = rows.filter((r: any) => r[w[1]] === params[0]);
Use a real FTS implementation or add MATCH handling and tests that verify search results actually match the query. Keep indexed workspaces narrow and avoid storing highly sensitive information in memory files.
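One way the mock wrapper could honor `MATCH` until a real FTS backend is in place, sketched here. The `filterRows` name and row shape are assumptions; this does naive substring term matching, not FTS ranking, but it at least stops unmatched memories from being returned.

```typescript
// Extend the mock DB filter so `WHERE content MATCH ?` actually restricts
// rows to those containing every query term, instead of being ignored.
function filterRows(
  sql: string,
  params: unknown[],
  rows: Record<string, any>[],
): Record<string, any>[] {
  const m = sql.match(/WHERE\s+(\w+)\s+MATCH\s+\?/i);
  if (m && params.length) {
    const col = m[1];
    const terms = String(params[0]).toLowerCase().split(/\s+/).filter(Boolean);
    // Keep only rows whose column contains every query term.
    return rows.filter((r) =>
      terms.every((t) => String(r[col] ?? '').toLowerCase().includes(t)),
    );
  }
  // Existing behavior: equality filter for WHERE col = ?
  const w = sql.match(/WHERE\s+(\w+)\s*=\s*\?/i);
  if (w && params.length) return rows.filter((r) => r[w[1]] === params[0]);
  return rows;
}
```

A test asserting that a query for one topic excludes unrelated entries would have caught the original bug.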
Users may need to do extra verification that the installed npm package matches the reviewed source.
The provided package files show no postinstall script or runtime dependencies, but registry provenance is incomplete, even though the docs reference an npm package and a GitHub repository.
Source: unknown; Homepage: none; Install specifications: No install spec — this is an instruction-only skill.
Install a pinned version from a trusted registry, compare it with the referenced repository, and avoid installing if the package source cannot be verified.
