Skill flagged — review recommended: ClawHub Security found sensitive or high-impact capabilities. Review the scan results before using.

Local Vector Memory

v1.0.0

Store, search, and manage local vector memories using Ollama embeddings with Qdrant, supporting Chinese and English text without cloud dependencies.

0 · 99 · 1 versions · 1 current · 1 all-time · Updated 17h ago · MIT-0
by Cong Pendy (@jancong)

Install

openclaw skills install local-vector-memory

Local Vector Memory Skill

Zero-cloud vector memory using Ollama embeddings + Qdrant local storage.

Prerequisites

# Ollama with embedding model
ollama pull qwen3-embedding:4b

# Install the package
pip install local-vector-memory

Quick Reference

lvm init                    # Initialize database
lvm add "text to remember"  # Store a memory
lvm search "query"          # Semantic search
lvm search "query" --limit 3 --json  # Structured output
lvm stats                   # Show stats
lvm reindex --dir ~/notes   # Reindex markdown files
lvm delete "source_name"    # Delete by source
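The `--json` flag makes the CLI scriptable. Below is a minimal sketch of consuming that output in Python, assuming the payload is a JSON array of objects with `score` and `text` keys (the same keys the Python library example uses; the exact schema is an assumption, not documented here):

```python
import json

def parse_search_results(raw: str, min_score: float = 0.0):
    """Turn `lvm search --json` output into (score, text) pairs,
    dropping hits below min_score. The list-of-dicts shape is an
    assumption based on the library's result keys."""
    return [(r["score"], r["text"])
            for r in json.loads(raw)
            if r["score"] >= min_score]

# Hand-written sample payload (not real lvm output):
sample = '[{"score": 0.91, "text": "ollama config"}, {"score": 0.42, "text": "unrelated"}]'
print(parse_search_results(sample, min_score=0.5))  # [(0.91, 'ollama config')]
```

A score threshold like this is useful when piping results into a prompt, so low-relevance chunks don't dilute the context.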

Python Library Usage

from local_vector_memory.core import LocalVectorMemory

lvm = LocalVectorMemory()  # uses env defaults
lvm.add("OpenClaw baseUrl must not end with /v1")
results = lvm.search("how to configure ollama")
for r in results:
    print(f"[{r['score']}] {r['source']}: {r['text'][:100]}")
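Result lists can contain several chunks from the same source file. A small helper (a sketch; `results` is assumed to follow the dict shape used in the loop above) that keeps only the best-scoring hits per source:

```python
from collections import defaultdict

def top_by_source(results, k=1):
    """Keep at most k highest-scoring hits per source.
    Assumes each result is a dict with 'score' and 'source' keys,
    matching the search loop above."""
    kept = defaultdict(list)
    for r in sorted(results, key=lambda r: r["score"], reverse=True):
        if len(kept[r["source"]]) < k:
            kept[r["source"]].append(r)
    return [r for hits in kept.values() for r in hits]

hits = [
    {"score": 0.9, "source": "a.md", "text": "x"},
    {"score": 0.8, "source": "a.md", "text": "y"},
    {"score": 0.7, "source": "b.md", "text": "z"},
]
print([r["source"] for r in top_by_source(hits)])  # ['a.md', 'b.md']
```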

Configuration

| Env Var | Default | Description |
|---|---|---|
| LVM_OLLAMA_URL | http://localhost:11434 | Must be localhost (SSRF protected) |
| LVM_MODEL | qwen3-embedding:4b | Embedding model |
| LVM_DIMS | 2560 | Vector dimensions |
| LVM_DB_PATH | ~/.local-vector-memory/qdrant | Storage path |
| LVM_CHUNK_SIZE | 400 | Chunk size in chars |
| LVM_CHUNK_OVERLAP | 50 | Overlap between chunks |
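LVM_CHUNK_SIZE and LVM_CHUNK_OVERLAP control how long texts are split before embedding. A character-level sketch of that scheme with the default values (the package's actual splitter is not documented here and may differ, e.g. by respecting word boundaries):

```python
def chunk(text, size=400, overlap=50):
    """Split text into fixed-size character windows, each sharing
    `overlap` characters with the previous one, mirroring the
    LVM_CHUNK_SIZE / LVM_CHUNK_OVERLAP defaults. Sketch only."""
    if size <= overlap:
        raise ValueError("chunk size must exceed overlap")
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

demo = chunk("abcdefghij", size=4, overlap=2)
print(demo)  # ['abcd', 'cdef', 'efgh', 'ghij']
```

The overlap means a sentence cut at a chunk boundary still appears whole in at least one chunk, at the cost of some duplicated storage.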

Embedding Model Selection

| Model | Dims | Size | Chinese Hit Rate | Best For |
|---|---|---|---|---|
| qwen3-embedding:4b | 2560 | ~2.5GB | 100% | Chinese/English mixed |
| bge-m3 | 1024 | ~570MB | 40% | Multilingual, low RAM |
| nomic-embed-text | 768 | 274MB | 30% | English-only, minimal RAM |
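Switching models changes the vector dimensionality, so previously stored embeddings no longer match. A sketch of moving to bge-m3 using the env vars from the table (whether a dims mismatch is detected automatically is an assumption; reindexing after the switch is the safe path):

```shell
ollama pull bge-m3            # fetch the smaller model
export LVM_MODEL=bge-m3
export LVM_DIMS=1024          # must match the model's output dimensions
lvm reindex --dir ~/notes     # re-embed existing content at the new dims
```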

Integration Patterns

With OpenClaw

Add to HEARTBEAT.md or cron for periodic reindexing:

lvm reindex --dir ~/.openclaw/workspace/memory
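For cron, a plausible entry looks like the following (the lvm path and the hourly schedule are placeholders — adjust to your install):

```shell
# min hour dom mon dow  command — hourly reindex (illustrative)
0 * * * * $HOME/.local/bin/lvm reindex --dir $HOME/.openclaw/workspace/memory
```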

As a backup search layer

When memory_search doesn't find what you need:

lvm search "query" --json

Security

  • Ollama URL restricted to localhost only (SSRF protection)
  • Path traversal blocked in reindex glob patterns
  • Input length limits enforced (100K text, 10K query)
  • All data stored locally, no network calls except to local Ollama
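The localhost restriction amounts to a hostname allow-list on the configured Ollama URL. An illustrative check in that spirit (the package's real validation is not shown here and may be stricter, e.g. resolving DNS or pinning the port):

```python
from urllib.parse import urlparse

def is_local_ollama_url(url: str) -> bool:
    """Accept only URLs whose host is loopback. Illustrative sketch of
    the SSRF guard described above, not the package's actual code."""
    host = urlparse(url).hostname
    return host in {"localhost", "127.0.0.1", "::1"}

print(is_local_ollama_url("http://localhost:11434"))      # True
print(is_local_ollama_url("http://169.254.169.254:11434"))  # False
```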

Version tags

latest → vk9799srefbe4vn8hwx20scm3z584bx7x