Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
ClawMemory
v1.0.0 · Sovereign agent memory engine — self-hosted, privacy-first SQLite store with LLM-based fact extraction (GLM-4.7), hybrid BM25+vector search, contradiction re...
MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan (OpenClaw)
Verdict: Suspicious (high confidence)

Purpose & Capability
The skill's name/description (self-hosted memory engine with LLM extraction and a plugin) aligns with the SKILL.md instructions: a local HTTP server, fact extraction, BM25/vector search, and an OpenClaw plugin. However, the SKILL.md assumes you will build/run Go and Node projects, use curl/python3 for CLI examples, and optionally run Ollama and Turso — none of these required binaries or credentials are declared in the registry metadata, which is an inconsistency that should be resolved.
Instruction Scope
Instructions direct the agent/operator to run a local server that auto-captures conversation turns (plugin pre-turn/post-turn behavior) and store those as structured facts. That behavior is expected for a memory engine, but it means the skill will capture and persist conversation content by default. The SKILL.md also documents optional Turso cloud sync and an extractor endpoint requiring an API key — both of which can cause data to leave the local host if configured. The instructions do not explicitly warn operators about these privacy/exfiltration risks.
Install Mechanism
The skill is instruction-only (no install spec), which is lower automated risk. However, it instructs you to git clone a GitHub repo and run go build and npm install/build for the plugin — fetching and building external source code is an explicit manual step and pulls arbitrary code from the listed repo (https://github.com/clawinfra/clawmemory). The registry metadata does not declare this external dependency; users should inspect that repository before building/running.
Credentials
Registry metadata declares no required environment variables or primary credential, but SKILL.md/config.json show fields for extractor.apiKey and store.tursoToken (sensitive secrets). The skill's operation (LLM extractor endpoint and optional Turso sync) legitimately requires secrets if you enable those features, so the absence of declared env requirements in the registry is an inconsistency and a potential blind spot for users who may inadvertently provide secrets or point the server at remote endpoints.
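For reference, the sensitive fields would sit in config.json roughly as follows. This layout is a sketch inferred from the dotted field names above (extractor.apiKey, store.tursoToken); the endpoint value shown is a hypothetical local Ollama address, and the whole fragment should be checked against the repository's own sample config. Left empty as shown, the secrets and tursoUrl keep operation local-only:

```json
{
  "extractor": {
    "endpoint": "http://localhost:11434",
    "apiKey": ""
  },
  "store": {
    "tursoUrl": "",
    "tursoToken": ""
  }
}
```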
Persistence & Privilege
The skill does not request always:true and does not modify other skills automatically per the registry flags. But the provided OpenClaw plugin is designed to be installed into the agent and will auto-inject pre-turn memory and auto-capture post-turn conversation data, creating persistent integration into the agent's runtime behavior. That persistence is expected for a memory plugin but is a meaningful privacy/persistence change an operator must opt into by installing the plugin.
Scan Findings in Context
[no_code_files_to_scan] expected: The regex scanner had nothing to analyze because this is an instruction-only skill (SKILL.md only). That is consistent with an instruction-only skill, but it means there's no automated scan of the external repo code that the SKILL.md instructs users to fetch and build.
What to check before installing:
- Inspect the GitHub repository (https://github.com/clawinfra/clawmemory) before git-cloning and building; the SKILL.md instructs you to build/run code from that repo.
- Expect to need tools not listed in the registry metadata: git, Go toolchain (go build), Node/npm (plugin), curl, python3 (examples), and optionally Ollama for embeddings. Ensure those binaries are available and trusted.
- Be aware the plugin auto-captures conversation turns (post-turn) and injects memory into the system prompt (pre-turn). If you enable the plugin, it will persist user/assistant content into the local SQLite store by default.
- If you enable Turso sync or point extractor.endpoint at a remote LLM service, conversation content and extracted facts may be transmitted off-host. Only supply API keys/tokens (extractor.apiKey, tursoToken) for endpoints you trust; the skill metadata does not declare these secrets but the config supports them.
- If you want strict local-only operation: keep tursoUrl empty, run a local extractor/embedding service (Ollama or another service on localhost), and verify the server is bound to localhost (port 7437) and not exposed externally.
- Prefer running builds and the server in an isolated environment (container or dedicated VM) until you've vetted the code and behavior.
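The tooling check above can be sketched as a small preflight script. This is a sketch assuming a POSIX shell; the tool list mirrors the bullets above and should be trimmed to the features you actually enable (e.g. drop go if you are not building the server):

```shell
#!/bin/sh
# Report which of the tools the SKILL.md examples assume are missing from PATH.
# These tools are NOT declared in the registry metadata (see review above).
check_tools() {
  missing=""
  for tool in "$@"; do
    # command -v is the portable way to test for a binary on PATH
    command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
  done
  if [ -n "$missing" ]; then
    echo "missing:$missing"
  else
    echo "ok"
  fi
}

check_tools git go npm curl python3
```

Run it before cloning: a "missing:" line tells you what to install (or which optional features to skip) before the build steps will work.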
Given the coherent purpose but the metadata/instruction discrepancies and the potential for inadvertent data sync, proceed only after code and config review.
