Skill v1.1.2
ClawScan security
Pidan Memory · ClawHub's context-aware review of the artifact, metadata, and declared behavior.
Scanner verdict
Suspicious · Mar 8, 2026, 1:09 PM
- Verdict: suspicious
- Confidence: high
- Model: gpt-5-mini
- Summary: The skill's code mostly matches its stated purpose (local vector memory using Ollama + LanceDB), but there are inconsistencies, a higher-risk install action (a remote installer piped through curl | sh), and undeclared environment/binary requirements that you should review before installing.
- Guidance: This skill appears to implement the described local memory system, but review and consider the following before installing:
  - Metadata mismatch: The registry lists no required binaries/env-vars, but the hook and docs require python3 and a running Ollama (localhost:11434). The code also expects the OPENCLAW_USER_ID environment variable for permission checks. Make sure you understand and set these before enabling the hook.
  - Privacy: Enabling the hook causes automatic capture of message content and writing to ~/.openclaw/workspace/memory. If you enable it, verify the data directory and retention policies, and confirm whether any sensitive content could be recorded.
  - Environment exposure: handler.ts passes process.env to the spawned Python process, so any environment variables available to the host process will be visible to the script. Avoid running it in a context containing secrets you don't want exposed.
  - Installer scripts: The included scripts can install Ollama and pull models. The Linux installer uses curl | sh (a remote install script); run such scripts only from trusted sources, or inspect them first. Model downloads can be large, and the download "accelerator" script repeatedly kills and restarts downloads; this is unusual but not obviously malicious.
  - Recommended steps: Inspect the full installer script from https://ollama.ai/install.sh before running it; run the skill in a sandbox or test environment first; back up and inspect ~/.openclaw/workspace/memory before enabling automatic hooks; and confirm that OPENCLAW_USER_ID will be set by your platform as expected.
  - If you want, I can list the exact files and lines that reference OPENCLAW_USER_ID, the curl | sh install command, and where process.env is forwarded, so you can inspect them in detail.
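The "inspect before running" recommendation above can be sketched as the save-then-review pattern: download the installer to a file instead of piping it straight to sh. In this self-contained sketch a heredoc stands in for the real download; in practice you would fetch https://ollama.ai/install.sh into the same file.

```shell
# Save the installer to a local file instead of piping curl straight to sh,
# so it can be read before anything executes. In practice the download is:
#   curl -fsSL https://ollama.ai/install.sh -o "$installer"
# A heredoc stands in for that download so this sketch is self-contained.
installer="$(mktemp)"
cat > "$installer" <<'EOF'
#!/bin/sh
echo "installer body would appear here"
EOF

# Read the whole script before deciding to run it.
head -n 20 "$installer"

# Run only after review (left commented on purpose):
# sh "$installer"
```

The point of the detour through a file is that the exact bytes you reviewed are the bytes you later execute, which a straight curl | sh cannot guarantee.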
Review Dimensions
- Purpose & Capability
- note: Name/description align with the code: the Python/TS files implement a LanceDB + Ollama local memory system with automatic hook-based capture, storage, search, deduplication, and per-user isolation. However, the registry metadata claims no required binaries/env-vars, while HOOK.md, SKILL.md, and the code expect python3 and a running Ollama (localhost:11434). That metadata mismatch is worth flagging.
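Given the undeclared python3 and Ollama requirements, a quick pre-flight check before enabling the hook can surface the mismatch early. A minimal sketch; the binary names come from the review, while the output strings are illustrative:

```shell
# Check for the undeclared prerequisites named in the review: a python3
# interpreter and the ollama binary. (Ollama must also actually be serving
# on localhost:11434, which needs a live check against that port.)
missing=""
for bin in python3 ollama; do
  command -v "$bin" >/dev/null 2>&1 || missing="$missing $bin"
done
if [ -z "$missing" ]; then
  status="prerequisites present"
else
  status="missing:$missing"
fi
echo "$status"
```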
- Instruction Scope
- concern: The hook (handler.ts + auto_memory.py) executes on message events and spawns a Python process that receives message content on stdin and may persist data under ~/.openclaw/workspace/memory. The runtime relies on the OPENCLAW_USER_ID environment variable for access control (SKILL.md and the code require it), but the skill metadata does not declare this. The code reads/writes only within the ~/.openclaw workspace and calls localhost Ollama for embeddings; it does not appear to call external network endpoints or exfiltrate data. Still, automatic capture of every message has privacy implications, and handler.ts merges process.env into the child's environment, so existing environment variables are available to the spawned process.
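One mitigation for the process.env forwarding described above is to launch the worker under an allow-listed environment. A sketch using env -i; the variable name OPENCLAW_USER_ID comes from the report, while SECRET_TOKEN is a hypothetical secret and the sh -c echo stands in for the real python3 auto_memory.py invocation:

```shell
# Launch the child with only an allow-listed environment: env -i starts from
# an empty environment, then only the named variables are passed through.
# SECRET_TOKEN is a hypothetical secret that must NOT reach the child;
# "sh -c ..." stands in for the actual "python3 auto_memory.py" worker.
export SECRET_TOKEN="do-not-leak"
export OPENCLAW_USER_ID="alice"

child_view=$(env -i \
  PATH="$PATH" \
  OPENCLAW_USER_ID="$OPENCLAW_USER_ID" \
  sh -c 'echo "user=$OPENCLAW_USER_ID secret=${SECRET_TOKEN:-unset}"')
echo "$child_view"
# prints: user=alice secret=unset
```

The same idea applies in handler.ts itself: pass an explicit object to spawn's env option instead of spreading process.env.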
- Install Mechanism
- concern: There is no formal install spec in the registry, but the package includes helper scripts. scripts/install_ollama.sh runs curl -fsSL https://ollama.ai/install.sh | sh on Linux (a remote installer piped to sh). scripts/download_accelerator.sh repeatedly starts and kills 'ollama pull' invocations to accelerate downloading a model. These scripts execute remote code and spawn background services (ollama serve). Running curl | sh installers and repeated background-process manipulation increase risk; review the scripts first, and run them only on a trusted system or in a sandbox.
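A crude but fast way to surface this risk class when reviewing bundled helper scripts is to grep them for remote-pipe-to-shell patterns before running anything. In this sketch a heredoc stands in for scripts/install_ollama.sh, and the verdict strings are illustrative:

```shell
# Flag "curl ... | sh" style remote execution inside a helper script before
# running it. A heredoc stands in for scripts/install_ollama.sh here.
script="$(mktemp)"
cat > "$script" <<'EOF'
#!/bin/sh
curl -fsSL https://ollama.ai/install.sh | sh
EOF

# -n prints the offending line number; the ERE matches curl piped to sh/bash.
if grep -nE 'curl[^|]*\|[[:space:]]*(ba)?sh' "$script"; then
  verdict="review required: remote code piped to a shell"
else
  verdict="no curl|sh pattern found"
fi
echo "$verdict"
```

A match is a prompt for manual review, not proof of malice; the fix is the same either way, namely downloading and reading the remote script before executing it.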
- Credentials
- concern: The skill does not declare required environment variables in the registry, but the code and SKILL.md depend on OPENCLAW_USER_ID for authentication and optionally respect MEMORY_MODE and MEMORY_DEDUP_AFTER. handler.ts explicitly passes the full process.env into spawned Python processes, exposing any environment variables present to the child. No external API keys are requested, and network calls appear limited to localhost (Ollama) and local LanceDB, which is proportionate; however, the undeclared reliance on OPENCLAW_USER_ID and the forwarding of process.env are mismatches and a modest risk.
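Because OPENCLAW_USER_ID is required but undeclared, a fail-fast check before enabling the hook avoids silent misbehavior. A sketch; the variable names come from the report, but the fallback values and messages are assumptions, not values taken from the skill:

```shell
# Fail fast if the undeclared-but-required variable is missing, then read
# the optional knobs. The fallbacks ("default", "unset") are illustrative
# assumptions, not the skill's actual defaults.
check_env() {
  if [ -z "${OPENCLAW_USER_ID:-}" ]; then
    echo "error: OPENCLAW_USER_ID must be set before enabling the hook" >&2
    return 1
  fi
  echo "user=$OPENCLAW_USER_ID mode=${MEMORY_MODE:-default} dedup=${MEMORY_DEDUP_AFTER:-unset}"
}

unset OPENCLAW_USER_ID MEMORY_MODE MEMORY_DEDUP_AFTER
check_env 2>/dev/null && ok_when_unset=yes || ok_when_unset=no

export OPENCLAW_USER_ID="alice"
line=$(check_env)
echo "$line"
# prints: user=alice mode=default dedup=unset
```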
- Persistence & Privilege
- ok: 'always' is false and the skill is user-invocable. Installing the hook (per SKILL.md) grants the skill automatic execution on message events, which is expected for an auto-memory hook. The skill writes to ~/.openclaw/workspace/memory (its own data); it does not modify other skills or global agent configs. No 'always: true' or implicit global privileges were found.
