Clawhub Skill
Verdict: Warn. Audited by ClawScan on May 10, 2026.
Overview
This memory skill mostly matches its stated purpose, but it needs review because it persistently stores and reuses conversation data, exposes a local memory API, and includes underdocumented command/cloud-LLM paths despite strong privacy claims.
Install only if you want a persistent local memory service. Before enabling it, decide whether conversation memories may be stored long-term, whether you will use external LLM providers, and how you will protect or delete the local memory database. Avoid adding an LLM API key or enabling command-based backends unless you understand what data and commands they can access.
Findings (7)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
If enabled or misconfigured, the skill could run local commands under the user's account, which is higher-impact than ordinary memory storage.
The static scan includes code showing the package can launch a command selected from aiConfig and pass prompt content to it. The public setup documents API-key LLM configuration, not a local command backend or allowlist.
const proc = spawn(aiConfig.cli_command, [...aiConfig.cli_args, prompt], {
Document this backend, require explicit opt-in, validate or allowlist the executable, and require user approval before running configured commands.
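The allowlist check recommended above can be sketched as follows. This is a minimal illustration in Python (the skill's actual backend is Node); the allowlist contents and the function name are hypothetical, and the config keys mirror the audited aiConfig fields.

```python
import shutil
from typing import Optional

# Hypothetical allowlist of LLM CLI executables the backend may launch.
# These names are illustrative, not part of the skill's documented config.
ALLOWED_COMMANDS = {"ollama", "llama-cli"}

def resolve_allowed_command(cli_command: str) -> Optional[str]:
    """Return the absolute path of cli_command only if it is both
    allowlisted and actually present on PATH; otherwise return None."""
    if cli_command not in ALLOWED_COMMANDS:
        return None
    return shutil.which(cli_command)
```

Resolving to an absolute path before spawning also avoids picking up a same-named binary placed earlier on PATH; a None result should abort the launch and surface a prompt for explicit user approval.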
A user may believe memory data never leaves the device even when enabling or following documented cloud LLM workflows.
The README makes categorical local/no-upload claims while also documenting optional external LLM use and an example that sends retrieved memories into an OpenAI request.
"100% local memory. Zero cloud upload." ... "If you add an LLM" ... "client = OpenAI()" ... "memory_context = \"\n\".join([m[\"content\"] for m in results[\"memories\"]])"
Replace absolute privacy claims with mode-specific language, clearly warn when memories or queries may be sent to a configured provider, and provide local-only defaults.
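A local-only default can be made explicit in code. The sketch below assumes a hypothetical opt-in flag, allow_external_llm, that is not part of the skill's documented configuration; it simply refuses to assemble a cloud-bound prompt unless the user has turned external use on.

```python
def build_llm_payload(memories, allow_external_llm=False):
    """Join retrieved memories into a prompt context, but only when
    sending to an external provider has been explicitly enabled."""
    if not allow_external_llm:
        raise PermissionError(
            "Memories stay local: set allow_external_llm=True to send "
            "retrieved content to a configured provider."
        )
    # Same join as the skill's documented integration example.
    return "\n".join(m["content"] for m in memories)
```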
Incorrect, sensitive, or prompt-like text stored as a memory could be reused later and steer the agent's response or expose private context.
The integration example places raw retrieved memory content into a system-context message, making stored conversation content influential in future model behavior.
{"role": "system", "content": f"User context:\n{memory_context}"}Treat retrieved memories as untrusted context, label them separately from system instructions, add review/delete controls, and document retention and poisoning risks.
Other local software could potentially read or write the user's memory store if the service is running and no authentication is enforced.
The documented local API stores and retrieves memories, but the examples show no authentication or permission boundary beyond localhost.
Base URL: `http://localhost:18800` ... `POST` `/memories` ... `POST` `/search` ... `-H "Content-Type: application/json"`
Document the API security model, bind only to localhost by default, add an access token or origin restrictions, and provide clear stop/disable instructions.
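If the service added token auth, clients would attach it per request. The sketch below builds (but does not send) such a request with the standard library; note the bearer token is an assumption on top of the documented API, which currently shows no authentication at all.

```python
import json
import urllib.request

BASE_URL = "http://localhost:18800"  # documented base URL

def make_request(path: str, body: dict, token: str) -> urllib.request.Request:
    """Build a POST to the memory API carrying a bearer token.
    Token support is hypothetical: it only works if the service enforces it."""
    return urllib.request.Request(
        BASE_URL + path,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )
```

Binding to localhost alone does not stop other local processes or a malicious web page (via DNS rebinding or permissive CORS) from reaching the port; a shared secret does.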
The configured LLM key may authorize paid API usage and access to any prompts sent to that provider.
The skill supports optional provider credentials for OpenAI-compatible LLM features. This is purpose-aligned, but users should know they are storing a provider key locally.
"api_base": "https://api.deepseek.com/v1", "api_key": "sk-xxx", "api_model": "deepseek-chat"
Use a scoped/low-limit provider key, protect the config file, and avoid enabling external LLM features for sensitive memories unless needed.
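Protecting the config file can be as simple as owner-only permissions on POSIX systems. The helper below is a sketch; the field names mirror the audited config example, and 0600 keeps other local accounts from reading the key.

```python
import json
import os
import stat

def write_provider_config(path: str, config: dict) -> None:
    """Write the LLM provider config (api_base / api_key / api_model)
    and restrict the file to read/write by its owner only."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump(config, f, indent=2)
    os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)  # mode 0600: owner rw only
```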
Users must trust the npm package and its startup behavior outside the registry's declared install contract.
The skill directs users to install an npm package and start a local service, but the registry section says there is no install spec and declares no required binaries.
npm install @cc-soul/openclaw # API auto-starts at localhost:18800
Verify the package source, pin versions where possible, and have the publisher declare Node/npm and install behavior in metadata.
The skill may continue updating memory state after setup while the local service is running.
The service performs ongoing background memory processing. This is disclosed and aligned with a memory engine, but it is persistent behavior users should notice.
background ... every minute: memory decay ... every hour: FSRS consolidation ... every 6h: L1→L2 topic clustering ... every 12h: L2→L3 mental model
Provide clear start, stop, disable, reset, and data-deletion instructions.
