# fast-unified-memory

A high-performance unified memory system that combines file-based OpenClaw memory storage with semantic vector search, using Ollama's nomic-embed-text model for fast, private, local embeddings.

This skill provides a unified memory layer that combines:

- **OpenClaw memory**: file-based storage searched by keyword matching
- **Semantic memory**: vector storage searched by cosine similarity

## Install

```
openclaw skills install fast-unified-memory
```
## Setup

Requires Ollama installed and running, with the nomic-embed-text model pulled:

```sh
# Install Ollama first
curl -fsSL https://ollama.ai/install.sh | sh

# Pull the embedding model
ollama pull nomic-embed-text

# Start Ollama
ollama serve
```
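Once Ollama is running, each memory text can be turned into a vector via its embeddings endpoint. A minimal sketch in Node 18+ (which ships a global `fetch`); the `embed` helper name is illustrative and not necessarily what the skill's code uses:

```javascript
// Sketch: request an embedding from a local Ollama server.
// Endpoint and payload follow Ollama's documented /api/embeddings API;
// error handling is deliberately minimal.
async function embed(text) {
  const res = await fetch("http://localhost:11434/api/embeddings", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "nomic-embed-text", prompt: text }),
  });
  if (!res.ok) throw new Error(`Ollama returned ${res.status}`);
  const { embedding } = await res.json(); // array of floats
  return embedding;
}
```

If Ollama is not running, the fetch rejects, which is why `ollama serve` must be started first.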
## Usage

```sh
# Search both memory systems
node fast-unified-memory.js search "your query"

# Add a memory
node fast-unified-memory.js add "User prefers concise responses"

# List all memories
node fast-unified-memory.js list

# Show system stats
node fast-unified-memory.js stats
```
## Architecture

```
┌─────────────────────────────────────────────┐
│             FAST UNIFIED MEMORY             │
│                                             │
│    ┌─────────────┐       ┌─────────────┐    │
│    │  OpenClaw   │       │  Semantic   │    │
│    │   Memory    │       │   Memory    │    │
│    │   (files)   │       │  (vectors)  │    │
│    └─────────────┘       └─────────────┘    │
│           ↓                     ↓           │
│    [Keyword Match]     [Cosine Similarity]  │
│                                             │
│          Unified Results (ranked)           │
└─────────────────────────────────────────────┘
```
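The vector side of the diagram ranks stored memories by cosine similarity between the query embedding and each stored embedding. A minimal sketch of that scoring step (the `cosineSimilarity` helper is illustrative; the skill's actual implementation may differ):

```javascript
// Cosine similarity between two equal-length embedding vectors:
// dot(a, b) / (|a| * |b|). Returns a value in [-1, 1]; higher means
// the two texts are semantically closer.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

Keyword matches from the file store and similarity scores from the vector store are then merged into one ranked result list.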
## Performance

| Metric | Value |
|---|---|
| Embedding generation | ~40ms |
| Vector search | ~50ms |
| File search | ~40ms |
| Total search | ~130ms |
## Configuration

The skill uses these defaults:

- Ollama URL: `http://localhost:11434`
- Embedding model: `nomic-embed-text`
- Semantic vector store: `~/.mem0/fast-store.json`
- OpenClaw memory directory: `~/.openclaw/workspace/memory/`

## Files

- `fast-unified-memory.js` - Main CLI tool
- `SKILL.md` - This documentation

## Troubleshooting

**Ollama not running:**
```sh
ollama serve
```
**Model not found:**

```sh
ollama pull nomic-embed-text
```
**Port conflict:**
The skill assumes Ollama is listening on port 11434. If you run Ollama on a different port, update the `OLLAMA_URL` constant in `fast-unified-memory.js`.
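One way to avoid editing the source for a port change is an environment-variable override. This assumes the constant is defined roughly as below; the override is a suggested pattern, not necessarily present in the skill:

```javascript
// Hypothetical shape of the constant in fast-unified-memory.js,
// extended so OLLAMA_URL can be overridden via the environment
// instead of editing the file for a non-default port.
const OLLAMA_URL = process.env.OLLAMA_URL || "http://localhost:11434";
```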
## License

MIT