Clawhub Skill

WarnAudited by ClawScan on May 10, 2026.

Overview

This memory skill mostly matches its stated purpose, but it needs review because it persistently stores and reuses conversation data, exposes a local memory API, and includes underdocumented command/cloud-LLM paths despite strong privacy claims.

Install only if you want a persistent local memory service. Before enabling it, decide whether conversation memories may be stored long-term, whether you will use external LLM providers, and how you will protect or delete the local memory database. Avoid adding an LLM API key or enabling command-based backends unless you understand what data and commands they can access.

Findings (7)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

What this means

If enabled or misconfigured, the skill could run local commands under the user's account, which is higher-impact than ordinary memory storage.

Why it was flagged

The static scan includes code showing the package can launch a command selected from aiConfig and pass prompt content to it. The public setup documents API-key LLM configuration, not a local command backend or allowlist.

Skill content
const proc = spawn(aiConfig.cli_command, [...aiConfig.cli_args, prompt], {
Recommendation

Document this backend, require explicit opt-in, validate or allowlist the executable, and require user approval before running configured commands.

What this means

A user may believe memory data never leaves the device even when enabling or following documented cloud LLM workflows.

Why it was flagged

The README makes categorical local/no-upload claims while also documenting optional external LLM use and an example that sends retrieved memories into an OpenAI request.

Skill content
"100% local memory. Zero cloud upload." ... "If you add an LLM" ... "client = OpenAI()" ... "memory_context = \"\n\".join([m[\"content\"] for m in results[\"memories\"]])"
Recommendation

Replace absolute privacy claims with mode-specific language, clearly warn when memories or queries may be sent to a configured provider, and provide local-only defaults.

What this means

Incorrect, sensitive, or prompt-like text stored as a memory could be reused later and steer the agent's response or expose private context.

Why it was flagged

The integration example places raw retrieved memory content into a system-context message, making stored conversation content influential in future model behavior.

Skill content
{"role": "system", "content": f"User context:\n{memory_context}"}
Recommendation

Treat retrieved memories as untrusted context, label them separately from system instructions, add review/delete controls, and document retention and poisoning risks.

What this means

Other local software could potentially read or write the user's memory store if the service is running and no authentication is enforced.

Why it was flagged

The documented local API stores and retrieves memories, but the examples show no authentication or permission boundary beyond localhost.

Skill content
Base URL: `http://localhost:18800` ... `POST` `/memories` ... `POST` `/search` ... `-H "Content-Type: application/json"`
Recommendation

Document the API security model, bind only to localhost by default, add an access token or origin restrictions, and provide clear stop/disable instructions.

What this means

The configured LLM key may authorize paid API usage and access to any prompts sent to that provider.

Why it was flagged

The skill supports optional provider credentials for OpenAI-compatible LLM features. This is purpose-aligned, but users should know they are storing a provider key locally.

Skill content
"api_base": "https://api.deepseek.com/v1", "api_key": "sk-xxx", "api_model": "deepseek-chat"
Recommendation

Use a scoped/low-limit provider key, protect the config file, and avoid enabling external LLM features for sensitive memories unless needed.

What this means

Users must trust the npm package and its startup behavior outside the registry's declared install contract.

Why it was flagged

The skill directs users to install an npm package and start a local service, but the registry section says there is no install spec and declares no required binaries.

Skill content
npm install @cc-soul/openclaw
# API auto-starts at localhost:18800
Recommendation

Verify the package source, pin versions where possible, and have the publisher declare Node/npm and install behavior in metadata.

What this means

The skill may continue updating memory state after setup while the local service is running.

Why it was flagged

The service performs ongoing background memory processing. This is disclosed and aligned with a memory engine, but it is persistent behavior users should notice.

Skill content
background ... every minute: memory decay ... every hour: FSRS consolidation ... every 6h: L1→L2 topic clustering ... every 12h: L2→L3 mental model
Recommendation

Provide clear start, stop, disable, reset, and data-deletion instructions.