LLM Memory Integration

v1.0.9

LLM + vector-model integration. Works with any LLM and embedding model, configured by the user. Supports hybrid retrieval, intelligent routing, progressive rollout, and automatic user-profile updates.

MIT-0
Security Scan
VirusTotal: Pending
OpenClaw: Benign (medium confidence)
Purpose & Capability
The name and description (LLM + vector memory integration) match the files and scripts present (embedding, LLM engine, search, maintenance, persona updater, etc.). The required binaries (python3, sqlite3) and the declared env vars (EMBEDDING_API_KEY, optional LLM_API_KEY) are reasonable for the stated functionality.
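As a quick sanity check before enabling the skill, the declared prerequisites can be probed from a shell. The loop below is a generic sketch, not part of the bundle:

```shell
# Sketch: confirm the binaries the review lists as required are on PATH.
for bin in python3 sqlite3; do
  if command -v "$bin" >/dev/null 2>&1; then
    echo "$bin: found"
  else
    echo "$bin: MISSING"
  fi
done
```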
Instruction Scope
SKILL.md and scripts instruct the agent/user to run many local scripts that read/write ~/.openclaw workspace files and the local vectors.db; they also perform network calls to user-configured LLM/embedding endpoints. This is within scope for a memory integration, but it means the skill will access user memory, persona.md and conversation data — review that data before consenting and be aware API calls will send content to whichever endpoints you configure.
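A pre-consent audit can be as simple as grepping the workspace files the skill will read and write for obvious secrets. The real files live under ~/.openclaw; the sketch below creates a demo workspace so the commands run as-is:

```shell
# Sketch: scan a persona file for secret-looking strings before granting access.
# demo-openclaw stands in for the real ~/.openclaw workspace.
mkdir -p demo-openclaw
printf 'name: demo\napi_key: REDACTED\n' > demo-openclaw/persona.md
grep -inE 'password|secret|api[_-]?key|token' demo-openclaw/persona.md \
  || echo "no obvious secrets found"
```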
Install Mechanism
No install spec is provided (instruction-only with included scripts), so nothing will be automatically downloaded or executed from external URLs. Code is present in the bundle and runs locally; no external installers or archive downloads were found.
Credentials
The skill requires an embedding API key (EMBEDDING_API_KEY) and optionally an LLM API key — this is proportionate to calling external embedding/LLM services. The repository and config indicate read/write access to ~/.openclaw memory files and vectors.db, which is necessary but sensitive (these files may contain PII).
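One way to keep these keys proportionate to their use is to pass them per invocation rather than exporting them in a shell profile, so they never persist in the environment. In this sketch, the echo stands in for the skill's actual entry point:

```shell
# Sketch: scope the documented env vars to a single command.
# Demo values shown; substitute your real keys and the real script.
EMBEDDING_API_KEY="demo-embedding-key" LLM_API_KEY="demo-llm-key" \
  sh -c 'echo "embedding key present: ${EMBEDDING_API_KEY:+yes}"'
```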
Persistence & Privilege
The skill does not set always:true and does not auto-install cron jobs (it provides maintenance_cron.txt but defers manual setup). It will create and update files under ~/.openclaw (logs, cache, persona.md, vectors.db updates). That file access is expected but persistent and should be reviewed by the user.
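Since cron setup is deferred to the user, the shipped schedule can be reviewed before it is ever installed, rather than piped straight into crontab. The cron line below is illustrative only; use the maintenance_cron.txt actually included in the bundle:

```shell
# Sketch: write a stand-in schedule, then inspect it before installing.
# The script path in the cron line is hypothetical.
cat > maintenance_cron.txt <<'EOF'
0 3 * * * python3 ~/.openclaw/skills/memory/scripts/maintenance.py
EOF
cat maintenance_cron.txt            # read the schedule and the command it runs
# crontab maintenance_cron.txt      # install only once you are satisfied
```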
Assessment
This package appears coherent for a local LLM + vector memory integration, but it will read and write your local memory database (~/.openclaw/memory-tdai/vectors.db), persona.md, caches, and logs, and it will send text to whatever embedding/LLM endpoints you configure. Things to check before installing:

- Verify and supply only trusted embedding/LLM endpoints and keys; any text sent there may be stored or logged by the provider.
- Inspect or vet the SQLite vector extension file (~/.openclaw/extensions/...) before allowing the skill to load it; a native .so can execute arbitrary code at load time. The project itself warns to verify the extension source.
- Review the persona and vectors DB for sensitive data; the skill will update persona.md and write vectors into vectors.db.
- The scripts run subprocesses and the sqlite3 CLI to query and load extensions. While they use parameterized calls (no shell=True), ensure the filesystem paths involved are the expected ones and are not symlinked to sensitive locations.
- If you need stronger assurance, run the scripts in an isolated environment (container or VM) and examine the full set of omitted files (35 files truncated here) before granting access to your production workspace.

I have medium confidence in this assessment: the code and SKILL.md are internally consistent, but native extension loading and broad read/write access to a local memory DB are sensitive and merit user review.
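For the extension concern above, one concrete mitigation is to pin the native .so to a known checksum before anything loads it. In real use, EXPECTED is pasted from a trusted release note; here it is computed from a demo file so the sketch runs end to end:

```shell
# Sketch: refuse to load a native extension whose hash does not match a pinned value.
EXT=./demo-vector-ext.so
printf 'demo extension bytes' > "$EXT"          # stand-in for the real .so
EXPECTED=$(sha256sum "$EXT" | awk '{print $1}') # in real use: paste the published hash
ACTUAL=$(sha256sum "$EXT" | awk '{print $1}')
if [ "$ACTUAL" = "$EXPECTED" ]; then
  echo "extension checksum OK"
else
  echo "checksum mismatch: refusing to load $EXT" >&2
  exit 1
fi
```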

Like a lobster shell, security has layers — review code before you run it.

Tags: embedding · hybrid · latest · llm · memory · rrf · search · vector

License

MIT-0
Free to use, modify, and redistribute. No attribution required.
