Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
Local Vector Memory
v1.0.0 · Store, search, and manage local vector memories using Ollama embeddings with Qdrant, supporting Chinese and English text without cloud dependencies.
by Cong Pendy @jancong
MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Suspicious
medium confidence

Purpose & Capability
The name and description match the SKILL.md: it documents Ollama embeddings backed by a Qdrant datastore, provides CLI usage (lvm) and a Python API, and lists the relevant configuration env vars. One minor inconsistency: the registry metadata shows no homepage or source, while the SKILL.md references PyPI and GitHub; this is plausible but worth verifying.
Instruction Scope
SKILL.md instructs indexing local directories (e.g., 'lvm reindex --dir ~/notes', and it recommends a cron/HEARTBEAT.md reindex of ~/.openclaw/workspace/memory). Reindexing arbitrary paths is intrinsic to the skill, but it can read and store sensitive local files if misconfigured. The doc also claims SSRF protection by restricting the Ollama URL to localhost, but that is a declarative claim: the skill is instruction-only and provides no enforcement mechanism.
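One way an operator can limit that exposure is to gate reindex targets against an explicit allowlist before the agent ever invokes the CLI. The sketch below is illustrative only: `ALLOWED_ROOTS` and `safe_to_reindex` are hypothetical names, not part of the local-vector-memory package.

```python
from pathlib import Path

# Hypothetical allowlist of directories the operator has approved for indexing.
ALLOWED_ROOTS = [Path.home() / "notes"]

def safe_to_reindex(target: str) -> bool:
    """Return True only if target resolves inside an approved root.

    Resolving first defeats '..' segments and symlink tricks that would
    let a reindex escape into dotfiles or secret stores.
    """
    resolved = Path(target).expanduser().resolve()
    return any(resolved.is_relative_to(root.resolve()) for root in ALLOWED_ROOTS)
```

A wrapper like this would refuse '~/notes/../.ssh' even though the raw string starts with an approved prefix, which is the failure mode naive prefix checks miss. (`Path.is_relative_to` requires Python 3.9+.)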
Install Mechanism
No install spec in the registry (instruction-only). The SKILL.md instructs 'pip install local-vector-memory' and 'ollama pull qwen3-embedding:4b'. Installing a third-party PyPI package is expected for this functionality but introduces supply-chain risk: the package and its code should be reviewed and trusted before pip installing. Downloading large embedding models via 'ollama pull' is normal for local embedding but depends on trusting the Ollama model provider/source.
Credentials
The skill requires no credentials in the registry. SKILL.md lists several configuration env vars (LVM_OLLAMA_URL, LVM_DB_PATH, etc.), which are reasonable for a local vector store. However, if LVM_OLLAMA_URL is changed from localhost to an external endpoint, embeddings and document contents could be transmitted off-host; the documentation's 'localhost only' guidance must be enforced by the operator.
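That guidance can be turned into a pre-flight check the operator runs before allowing the skill to execute. This is a sketch under the assumption that LVM_OLLAMA_URL holds a full HTTP URL; the helper name and default are assumptions (11434 is Ollama's standard local port), not part of the package.

```python
import os
from urllib.parse import urlparse

# Hosts considered local; anything else risks sending content off-host.
LOCAL_HOSTS = {"localhost", "127.0.0.1", "::1"}

def ollama_url_is_local(default: str = "http://localhost:11434") -> bool:
    """Check that LVM_OLLAMA_URL (or the default) points at a loopback host."""
    url = os.environ.get("LVM_OLLAMA_URL", default)
    return urlparse(url).hostname in LOCAL_HOSTS
```

Running a check like this in a wrapper script (and refusing to start otherwise) converts the documentation's declarative 'localhost only' claim into an actual control.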
Persistence & Privilege
always: false (no forced presence) and normal model invocation are declared. The skill suggests adding periodic reindexing (cron/heartbeat), which is expected for a memory tool but increases exposure if misconfigured; this is a usage risk rather than a privilege mis-declaration in the skill metadata.
What to consider before installing
This skill appears to do what it says (local vector memory using Ollama + Qdrant), but it relies on installing a third-party PyPI package and reindexing local directories, which can expose sensitive files if misused. Before installing or running:

1. Verify the PyPI project and GitHub repo (maintainer, recent activity, inspect the source) rather than blindly pip installing.
2. Run the installation inside an isolated environment (virtualenv/container) and review what files the package writes.
3. Ensure LVM_OLLAMA_URL is bound to localhost and not reachable remotely; do not set it to a public endpoint.
4. Restrict reindex targets to explicit directories you control, and avoid system paths, home dotfiles, and secret stores.
5. Prefer running 'lvm' manually first to confirm behavior rather than enabling automated reindex/cron.
6. If you need stronger assurance, review the package code or run it in a sandboxed VM before allowing the agent to invoke it autonomously.

Like a lobster shell, security has layers: review code before you run it.
latest · vk9799srefbe4vn8hwx20scm3z584bx7x
