Elite Longterm Memory
Local vector memory system with LanceDB + Pure JS embedding. No native modules or external APIs required.
MIT-0 · Free to use, modify, and redistribute. No attribution required.
⭐ 1 · 451 · 3 current installs · 3 all-time installs
by @LHMiles
Security Scan
OpenClaw
Suspicious (high confidence)
Purpose & Capability
The name/description advertise 'no native modules or external APIs required' and 'pure local', but the code and README/SKILL.md instruct users to run Ollama, pull nomic-embed-text, and run npm install. The package-lock lists @lancedb/lancedb and apache-arrow (optional native binaries). The ONNX embedder downloads model files over the network using curl. These requirements contradict the 'no external APIs/native modules' claim.
Instruction Scope
Runtime instructions and plugin code read/write many workspace files (SESSION-STATE.md, MEMORY.md, memory/*), which is expected for a memory plugin. However the SKILL.md and code also direct network operations (ollama endpoints, model pulls) and reference environment variables (OLLAMA_URL, EMBEDDING_MODEL, OLLAMA_HOST) that are not declared in metadata. The plugin auto-injects memory content via a before_agent_start hook — expected for a memory tool, but it means prompts are used as search queries automatically (privacy/behavior implication).
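The auto-injection behavior flagged here can be sketched as follows. The hook name `before_agent_start` comes from the scan above; `searchMemories` and the context shape are hypothetical stand-ins for the plugin's real internals, shown only to illustrate why every prompt becomes a search query.

```javascript
// Hypothetical sketch of a before_agent_start hook that turns the
// user's prompt into an automatic memory search.
function searchMemories(query, memories, limit = 3) {
  // Naive relevance: count words shared between the query and each memory.
  const words = new Set(query.toLowerCase().split(/\s+/));
  return memories
    .map((m) => ({
      text: m,
      score: m.toLowerCase().split(/\s+/).filter((w) => words.has(w)).length,
    }))
    .filter((m) => m.score > 0)
    .sort((a, b) => b.score - a.score)
    .slice(0, limit)
    .map((m) => m.text);
}

// The hook: matching memories are prepended to the agent's context
// without the user asking for a recall.
function beforeAgentStart(prompt, memories) {
  const recalled = searchMemories(prompt, memories);
  return recalled.length
    ? `Relevant memories:\n${recalled.join('\n')}\n\n${prompt}`
    : prompt;
}
```

This is the privacy implication in concrete terms: whatever the user types is used as a lookup key against stored memories, and matches surface automatically.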
Install Mechanism
The skill lists no install spec, yet contains package.json and package-lock.json and explicitly tells users to run npm install. The ONNX embedding component uses execSync to run curl against a third‑party mirror (hf-mirror.com) to download model files — a high-risk download mechanism. The presence of optional native binaries in the lockfile (@lancedb platform-specific packages, apache-arrow) contradicts the 'no native modules' promise.
Credentials
skill metadata declares no required environment variables, but the code relies on OLLAMA_URL / EMBEDDING_MODEL and the Kimi embedding module will require KIMI_API_KEY (it throws if not present). KimiEmbedding also uses axios at an absolute path (/usr/lib/node_modules/openclaw/node_modules/axios) which is an unexpected reference to the host filesystem. These undeclared env/credential requirements and absolute-path dependency are disproportionate and puzzling.
Persistence & Privilege
always:false (normal). The plugin registers tools and a before_agent_start handler that can autonomously inject memory into prompts — this is expected for a memory plugin and increases its operational reach, but by itself is not a misconfiguration. No evidence the skill modifies other skills or global agent config beyond normal plugin registration.
What to consider before installing
This skill contains multiple inconsistencies you should understand before installing:
- The README/SKILL.md claim 'no native modules or external APIs', but the code requires or optionally uses: a local Ollama server (network), npm packages (including platform-specific optional native binaries), and an ONNX model downloaded via curl from a third-party mirror. If you need strictly offline/pure-JS behavior, this skill does not guarantee that.
- KimiEmbedding references an absolute axios path (/usr/lib/node_modules/openclaw/node_modules/axios) and will attempt to call an external API if configured (KIMI_API_KEY). That absolute require is fragile and suspicious — it assumes host filesystem layout and could fail or be abused.
- The ONNX embedding code performs network downloads with execSync/curl. Downloads from arbitrary URLs can supply malicious binaries or model files; inspect the download target and prefer official release hosts.
- The plugin will write files into whatever directory you run it from (SESSION-STATE.md, memory/). Run initialization in an isolated or empty directory to avoid accidental overwrites of important files.
- The plugin auto-injects memories into agent prompts (before_agent_start). If you enable it, be aware that stored memories will be automatically surfaced to agents; disable autoRecall/autoCapture in config if you want tighter control.
Recommendations before installing:
1. Review package.json/package-lock and the code paths that call curl/execSync and the KimiEmbedding implementation. Ask the author why absolute axios path is used.
2. Run npm install and initial tests in a sandboxed folder or container, not your primary home/work folder.
3. If you want truly offline/no-native behavior, prefer using only the purejs/simple embedding components and avoid running the ONNX or Ollama instructions.
4. If you plan to allow network downloads or run Ollama, verify the model sources are trusted and consider pinning checksums.
5. If you are uncomfortable with the undeclared env usage (KIMI_API_KEY, OLLAMA_*), do not enable the plugin globally; keep it user-invocable and disable autoRecall.
Given these contradictions and the presence of networked downloads and absolute-path references, treat this skill as suspicious until the author clarifies the intended execution model and removes or documents external/network/native requirements.
Like a lobster shell, security has layers — review code before you run it.
Current version: v1.1.0
Runtime requirements
🧠 Clawdis
SKILL.md
Elite Longterm Memory (Local Edition) 🧠
A local vector memory system built on LanceDB + pure JavaScript embedding, with no external APIs required.
Core Features
- ✅ Fully local — pure JavaScript embedding, zero external dependencies
- ✅ WAL protocol — write-ahead logging to prevent data loss
- ✅ LanceDB vector search — semantic recall of relevant memories
- ✅ Three-tier storage — Hot/Warm/Cold tiered management
- ✅ Zero configuration — no Ollama or OpenAI API key needed
- ✅ Auto recall/capture — intelligent injection of relevant context
Architecture
┌─────────────────────────────────────────────────────────────────┐
│ ELITE LONGTERM MEMORY │
├─────────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ HOT RAM │ │ WARM STORE │ │ COLD STORE │ │
│ │ │ │ │ │ │ │
│ │ SESSION- │ │ LanceDB │ │ Git-Notes │ │
│ │ STATE.md │ │ Vectors │ │ Knowledge │ │
│ │ │ │ │ │ Graph │ │
│ │ (survives │ │ (semantic │ │ (permanent │ │
│ │ compaction)│ │ search) │ │ decisions) │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ │
│ │ │ │ │
│ └────────────────┼────────────────┘ │
│ ▼ │
│ ┌─────────────┐ │
│ │ MEMORY.md │ ← Curated long-term │
│ │ + daily/ │ (human-readable) │
│ └─────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────┘
Five-Tier Memory System
| Tier | File/System | Purpose | Persistence |
|---|---|---|---|
| 1. Hot RAM | SESSION-STATE.md | Active task context | Survives compaction |
| 2. Warm Store | LanceDB Vectors | Semantic search | Auto recall |
| 3. Cold Store | Git-Notes | Structured decisions | Permanent |
| 4. Archive | MEMORY.md + daily/ | Human-readable | Curated archive |
| 5. Embedding | Ollama | Local vector model | Fully local |
Quick Start
1. Install dependencies
# Make sure Ollama is installed and running
ollama --version
# Pull the embedding model
ollama pull nomic-embed-text
# Install Node dependencies
cd skills/elite-longterm-memory
npm install
2. Initialize the memory system
node bin/init.js
This creates:
- SESSION-STATE.md — hot memory
- MEMORY.md — long-term memory
- memory/ — daily log directory
- memory/vectors/ — LanceDB database
3. Use the memory tools
# Store a memory
node bin/memory.js store "User prefers dark mode" --importance 0.9 --category preference
# Search memories
node bin/memory.js search "UI preferences"
# Show statistics
node bin/memory.js stats
# Delete a memory
node bin/memory.js forget --query "dark mode"
OpenClaw Integration
Enable the plugin
Add the following to ~/.openclaw/openclaw.json:
{
"plugins": {
"entries": {
"elite-longterm-memory": {
"enabled": true,
"config": {
"ollamaUrl": "http://localhost:11434",
"embeddingModel": "nomic-embed-text",
"dbPath": "./memory/vectors",
"autoRecall": true,
"autoCapture": false
}
}
}
}
}
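The security scan notes that OLLAMA_URL and EMBEDDING_MODEL are read from the environment without being declared in metadata. A sketch of how such fallback resolution typically looks (`resolveConfig` is hypothetical; the env names come from the scan and the defaults mirror the sample config above):

```javascript
// Resolve plugin settings with environment-variable fallbacks.
// Precedence sketch: explicit config > environment > built-in default.
function resolveConfig(config = {}) {
  return {
    ollamaUrl:
      config.ollamaUrl || process.env.OLLAMA_URL || 'http://localhost:11434',
    embeddingModel:
      config.embeddingModel || process.env.EMBEDDING_MODEL || 'nomic-embed-text',
    dbPath: config.dbPath || './memory/vectors',
  };
}
```

This is why "zero configuration" is only half-true: undeclared environment variables can silently redirect where embedding requests are sent.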
Use the memory tools
Once enabled, OpenClaw automatically provides the following tools:
- memory_recall — search related memories
- memory_store — store important information
- memory_forget — delete memories
Smart Prompts
Add to AGENTS.md or SOUL.md:
## Memory Protocol
### At session start
1. Read SESSION-STATE.md — get the hot context
2. Use memory_recall to search relevant history
3. Check memory/YYYY-MM-DD.md for recent activity
### During the conversation
- User gives a concrete detail? → Write it to SESSION-STATE.md first, then reply
- Important decision? → Store it with memory_store
- Preference expressed? → memory_store --importance 0.9 --category preference
### At session end
1. Update SESSION-STATE.md with the final state
2. Move important content to MEMORY.md
3. Create/update memory/YYYY-MM-DD.md
WAL Protocol (Critical)
Write-ahead logging: write the state first, then reply.
| Trigger | Action |
|---|---|
| User expresses a preference | Write to SESSION-STATE.md → then reply |
| User makes a decision | Write to SESSION-STATE.md → then reply |
| User gives a deadline | Write to SESSION-STATE.md → then reply |
| User corrects you | Write to SESSION-STATE.md → then reply |
Why? If you reply first and save later, a crash or compaction loses the context. WAL ensures the data persists.
Maintenance Commands
# Show vector memory statistics
node bin/memory.js stats
# Search all memories
node bin/memory.js search "*" --limit 50
# Deduplicate memories
node bin/memory.js dedup
# Export memories
node bin/memory.js export --format json > memories.json
# Back up memories
node bin/memory.js backup ./backups/memory-$(date +%Y%m%d).zip
Troubleshooting
Ollama connection fails
→ Check that ollama serve is running
→ Check the OLLAMA_HOST environment variable
Vector search returns no results
→ Check that the LanceDB path is correct
→ Confirm memories have been stored: node bin/memory.js stats
Memory usage too high
→ Run node bin/memory.js compact to compact the vectors
→ Clean up old memories: node bin/memory.js cleanup --before 30d
Why Local Embedding?
| Comparison | OpenAI API | Local Ollama |
|---|---|---|
| Cost | Pay per token | Free |
| Latency | Network-bound | Local, millisecond-level |
| Privacy | Data leaves your machine | Fully local |
| Offline | Unavailable | Available |
| Quality | text-embedding-3 | nomic-embed-text |
For personal use, the quality of nomic-embed-text is good enough, and it is completely free.
Links
- Ollama: https://ollama.com
- LanceDB: https://lancedb.github.io (npm: vectordb)
- nomic-embed-text: https://ollama.com/library/nomic-embed-text
Local-first, privacy-first.
Files
16 total
