local_memory
v1.0.0 · Manage AI conversation memory locally with automatic extraction, retrieval, and manual commands, ensuring privacy without external APIs or fees.
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Verdict: Benign (high confidence)

Purpose & Capability
The name/description, declared Python dependencies (jieba, scikit-learn, numpy), and implemented behavior (local SQLite DB, TF-IDF embeddings, local jieba tokenizer) are mutually coherent: the code implements local extraction, storage, embedding, retrieval, and manual commands as described.
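As a sketch of how this kind of local TF-IDF retrieval typically works (not the skill's actual code; the memory strings are invented, and a plain whitespace tokenizer stands in for jieba):

```python
# Minimal sketch of local TF-IDF retrieval in the spirit of the skill:
# embed stored memories with scikit-learn's TfidfVectorizer and rank
# them against a query by cosine similarity. The real skill tokenizes
# with jieba (needed for Chinese text); str.split stands in for it here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

memories = [
    "user prefers dark mode in the editor",
    "user's current project is a flask web app",
    "user likes concise answers",
]

vectorizer = TfidfVectorizer(tokenizer=str.split, token_pattern=None)
memory_matrix = vectorizer.fit_transform(memories)

def retrieve(query, top_k=2):
    """Return the top_k stored memories most similar to the query."""
    scores = cosine_similarity(vectorizer.transform([query]), memory_matrix)[0]
    ranked = sorted(zip(scores, memories), reverse=True)
    return [text for score, text in ranked[:top_k] if score > 0]
```

Everything stays in-process: no network calls, no external embedding API, which is consistent with the skill's privacy claim.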
Instruction Scope
SKILL.md and the code instruct the agent to automatically extract and persist memory from every conversation and to inject retrieved memory into system prompts for subsequent messages. This matches the intended purpose, but the extraction is rule-based and will store any user-provided content (including credentials, IPs, or other secrets) unless the user disables auto_extract or filters content.
Install Mechanism
No install spec is provided (instruction-only install), and no remote downloads or archive extraction are present. The repo includes Python files that depend on common packages; installing those via pip is the expected next step. There are no surprising external URLs or installers.
Credentials
The skill requests no environment variables, no external credentials, and uses only local file paths (db/memory.db and lib/models/). Required permissions are limited to local disk I/O for its own data files.
Persistence & Privilege
The skill is configured to trigger on every message (skill.json pattern ".*" and the '自动全局触发' ("automatic global trigger") directive in SKILL.md), and the platform default allows autonomous invocation. Although the skill is not set to always:true, this combination means it will run automatically for normal conversations and persist extracted memories locally, increasing privacy exposure. The skill does not modify other skills or system-wide settings.
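To see why the ".*" pattern amounts to an always-on trigger, a two-line check suffices:

```python
# The trigger pattern ".*" from skill.json matches any string,
# including the empty string, so the skill fires on every turn.
import re

trigger = re.compile(".*")
messages = ["hello", "deploy to the staging server", ""]
fires = [bool(trigger.match(m)) for m in messages]
```

Every entry in `fires` is True, so there is no message the skill would skip.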
Assessment
This skill appears to do exactly what it says (local memory extraction and retrieval). Before installing, consider the following:
(1) It will automatically save anything users write into db/memory.db; do not send secrets (passwords, API keys, private server credentials) in chats or tests.
(2) To avoid automatic storage, disable auto_extract and/or auto_inject in config.json, or set expire_days low.
(3) Secure the database file (permissions, backups, encrypted disk) if it will contain sensitive information.
(4) Installation requires Python packages (jieba, scikit-learn, numpy); prefer installing them in a virtualenv.
(5) Review and run test.py only with non-sensitive example data.
(6) For stronger filtering of sensitive tokens, add explicit secret-detection or opt-out rules before enabling auto-extraction.
Overall, the skill is internally consistent and local-only, but its automatic capture of all conversation content is the primary privacy risk.
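The secret-detection idea in point (6) can be sketched as a pre-storage gate. The patterns below are illustrative examples, not an exhaustive or production-grade detector:

```python
# Sketch of an opt-out filter: reject memory candidates that look like
# credentials before they are written to db/memory.db. Patterns here
# are illustrative only; real deployments should use a fuller ruleset.
import re

SECRET_PATTERNS = [
    # key=value style credentials
    re.compile(r"(?i)\b(api[_-]?key|token|password|passwd|secret)\b\s*[:=]"),
    # AWS access key IDs
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    # bare IPv4 addresses (point 1 flags IPs as sensitive)
    re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
]

def is_safe_to_store(text: str) -> bool:
    """Return False if the text looks like it contains a secret."""
    return not any(p.search(text) for p in SECRET_PATTERNS)
```

Calling `is_safe_to_store` on each extracted candidate before the SQLite insert would let the skill keep auto-extraction on while dropping the riskiest content.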
