Install
Store, search, and manage local vector memories using Ollama embeddings with Qdrant local storage. Supports Chinese and English text with zero cloud dependencies.

```shell
openclaw skills install local-vector-memory
```
```shell
# Ollama with an embedding model
ollama pull qwen3-embedding:4b

# Install the package
pip install local-vector-memory
```
```shell
lvm init                             # Initialize database
lvm add "text to remember"           # Store a memory
lvm search "query"                   # Semantic search
lvm search "query" --limit 3 --json  # Structured output
lvm stats                            # Show stats
lvm reindex --dir ~/notes            # Reindex markdown files
lvm delete "source_name"             # Delete by source
```
```python
from local_vector_memory.core import LocalVectorMemory

lvm = LocalVectorMemory()  # uses env defaults
lvm.add("OpenClaw baseUrl must not end with /v1")

results = lvm.search("how to configure ollama")
for r in results:
    print(f"[{r['score']}] {r['source']}: {r['text'][:100]}")
```
| Env Var | Default | Description |
|---|---|---|
| `LVM_OLLAMA_URL` | `http://localhost:11434` | Must be localhost (SSRF protected) |
| `LVM_MODEL` | `qwen3-embedding:4b` | Embedding model |
| `LVM_DIMS` | `2560` | Vector dimensions |
| `LVM_DB_PATH` | `~/.local-vector-memory/qdrant` | Storage path |
| `LVM_CHUNK_SIZE` | `400` | Chunk size in characters |
| `LVM_CHUNK_OVERLAP` | `50` | Overlap between chunks, in characters |
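`LVM_CHUNK_SIZE` and `LVM_CHUNK_OVERLAP` control how long documents are split before embedding. A minimal sketch of overlapping character chunking under the defaults above (`chunk_text` is a hypothetical helper shown for illustration, not the package's actual implementation):

```python
def chunk_text(text: str, size: int = 400, overlap: int = 50) -> list[str]:
    """Split text into chunks of up to `size` chars, each sharing
    `overlap` chars with the previous chunk (illustrative sketch only)."""
    step = size - overlap
    return [text[start:start + size]
            for start in range(0, max(len(text) - overlap, 1), step)]
```

Because consecutive chunks share 50 characters, text near a chunk boundary also appears at the start of the next chunk, which helps boundary-straddling phrases stay searchable.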
| Model | Dims | Size | Chinese Hit Rate | Best For |
|---|---|---|---|---|
| `qwen3-embedding:4b` | 2560 | ~2.5 GB | 100% | Chinese/English mixed |
| `bge-m3` | 1024 | ~570 MB | 40% | Multilingual, low RAM |
| `nomic-embed-text` | 768 | 274 MB | 30% | English-only, minimal RAM |
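When switching models, `LVM_DIMS` must match the new model's embedding dimensions, or stored and query vectors won't be comparable. For example, to move to the lighter `bge-m3` from the table above (pull the model with `ollama pull bge-m3` first):

```shell
export LVM_MODEL=bge-m3
export LVM_DIMS=1024   # must match the model's embedding dimensions
```

Note that vectors embedded with different models live in different spaces, so after a model change you will need to re-add or reindex existing memories.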
Add to HEARTBEAT.md or cron for periodic reindexing:

```shell
lvm reindex --dir ~/.openclaw/workspace/memory
```
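For cron, a sample crontab entry (the nightly schedule is only an example; adjust the time and directory to taste):

```shell
# Reindex the memory directory nightly at 03:00
0 3 * * * lvm reindex --dir ~/.openclaw/workspace/memory
```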
When `memory_search` doesn't find what you need, fall back to querying the index directly:

```shell
lvm search "query" --json
```