Install
openclaw skills install openclaw-memvid-logger

ClawHub Security found sensitive or high-impact capabilities. Review the scan results before using.
Logs all OpenClaw conversations and events with role tags, saving to JSONL and Memvid for full context search and monthly sharded or single-file storage.
Version: 1.2.5 (Critical Fixes Edition)
Author: stackBlock
License: MIT
OpenClaw: >= 2026.2.12
A dual-output conversation logger for OpenClaw that captures everything - user messages, assistant responses, sub-agent conversations, tool calls, and system events - to both JSONL (backup) and Memvid (semantic search) formats.
Memvid: A single-file memory layer for AI agents with instant retrieval and long-term memory. Persistent, versioned, and portable memory, without databases.
"Replace complex RAG pipelines with a single portable file you own, and give your agent instant retrieval and long-term memory."
Before installing, please understand:
This skill captures everything - by design. It logs all user messages, assistant responses, sub-agent conversations, tool outputs, and system events to local files. This enables powerful long-term memory but requires trust.
What you should know:
MEMVID_API_KEY sends data to memvid.com (a third-party service). Free/local modes keep data on your machine only.

Mitigations available:
Read tools/log.py before installing to understand exactly what gets logged.
Restrict log files to your own user (e.g. chmod 600).

This skill is for users who want complete conversation memory and accept the privacy trade-offs.
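The permissions mitigation can be scripted. A minimal sketch, assuming the skill's default output paths (lock_down is a hypothetical helper, not part of the skill):

```shell
# Hypothetical helper: tighten permissions on the logger's outputs so only
# your user can read them (paths are the skill's documented defaults).
lock_down() {
  for f in "$@"; do
    [ -e "$f" ] && chmod 600 "$f"
  done
  return 0
}
lock_down "$HOME/workspace/conversation_log.jsonl" "$HOME/workspace"/memory*.mv2
```

Run it after the first session, or wire it into a cron job if you want it re-applied.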
Critical fixes in 1.2.5:
KEY=VALUE tag format for Memvid 2.0+ compatibility: --tag "user,telegram" is now --tag "role=user" --tag "source=telegram"
/etc/environment instructions (.bashrc doesn't work for background services)
(.js) requirement for OpenClaw 2026.2.12+
--mode neural as default for maximum accuracy

Best for: Heavy users, unified search across everything
Cost: $59-299/month via memvid.com
# 1. Get API key from memvid.com ($59/month for 1GB, $299 for 25GB)
export MEMVID_API_KEY="your_api_key_here"
export MEMVID_MODE="single"
# 2. Install
npm install -g memvid
git clone https://github.com/stackBlock/openclaw-memvid-logger.git
cp -r openclaw-memvid-logger ~/.openclaw/workspace/skills/
# 3. Create unified memory file
memvid create ~/memory.mv2
# 4. Start OpenClaw - everything logs to one searchable file
Search everything at once:
memvid ask memory.mv2 "What did we discuss about BadjAI?"
memvid ask memory.mv2 "What did the researcher agent find about Tesla?"
memvid ask memory.mv2 "Show me all the Python scripts I asked for"
Best for: Testing, light usage, single searchable file
Cost: FREE
# 1. Install (no API key needed)
npm install -g memvid
git clone https://github.com/stackBlock/openclaw-memvid-logger.git
cp -r openclaw-memvid-logger ~/.openclaw/workspace/skills/
export MEMVID_MODE="single"
# 2. Create memory file
memvid create ~/memory.mv2
# 3. Start OpenClaw
⚠️ Limit: 50MB (~5,000 conversation turns). When you hit it: archive and start fresh, switch to monthly sharding, or get an API key (all three options are shown under troubleshooting below).
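To see how close you are before you hit the cap, a rough check (a sketch assuming the default MEMVID_PATH and GNU stat; usage_pct is a hypothetical helper):

```shell
# Report how much of the 50MB free tier a memory file uses.
# 1% of 50MB = 524,288 bytes, so integer-divide the size by that.
usage_pct() {
  size=0
  [ -f "$1" ] && size=$(stat -c %s "$1")
  echo $(( size / 524288 ))
}
echo "free-tier usage: $(usage_pct "${MEMVID_PATH:-$HOME/workspace/memory.mv2}")%"
```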
Best for: Long-term use, staying under free tier
Cost: FREE
Trade-off: Multi-file search
# 1. Install (no API key needed)
npm install -g memvid
git clone https://github.com/stackBlock/openclaw-memvid-logger.git
cp -r openclaw-memvid-logger ~/.openclaw/workspace/skills/
export MEMVID_MODE="monthly" # This is the default
# 2. Start OpenClaw - auto-creates monthly files
How it works:
memory_2026-02.mv2 (February)
memory_2026-03.mv2 (March - auto-created)

⚠️ Sharding Search Differences:
Single-file search (API/Free modes):
# One search gets everything
memvid ask memory.mv2 "What car did I decide to buy?"
# Returns: Results from ALL conversations across ALL time
Sharding search (requires multiple queries):
# Must search each month separately
memvid ask memory_2026-02.mv2 "car decision" # Recent
memvid ask memory_2026-01.mv2 "car decision" # January
# Or use a wrapper script to search all files
for file in memory_*.mv2; do
echo "=== $file ==="
memvid ask "$file" "car decision" 2>/dev/null | head -5
done
# You must know which month the conversation happened
# No cross-month context - "compare this month to last month" won't work
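If most of your questions concern recent history, a wrapper that checks only the current and previous shard covers the common case. A sketch assuming GNU date and the default shard directory:

```shell
# Search only the current and previous monthly shards (GNU date; anchoring
# at the 1st avoids end-of-month rollover surprises with "-1 month").
this_month=$(date +%Y-%m)
prev_month=$(date -d "${this_month}-01 -1 month" +%Y-%m)
for f in "$HOME/workspace/memory_${this_month}.mv2" \
         "$HOME/workspace/memory_${prev_month}.mv2"; do
  [ -f "$f" ] && { echo "=== $f ==="; memvid ask "$f" "car decision"; }
done
true  # keep exit status clean when no shards exist yet
```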
Why sharding is harder: you have to know which month a conversation happened in, and cross-month questions ("compare this month to last month") don't work.
| Role | Tag | Example Search |
|---|---|---|
| User | [user] | "What did I say about Mercedes?" |
| Assistant | [assistant] | "What did you recommend?" |
| Sub-agents | [agent:researcher], [agent:coder] | "What did the researcher find?" |
| System | [system] | "When did the cron job run?" |
| Tools | [tool:exec], [tool:browser] | "What commands were run?" |
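The same role tags land in the JSONL backup, so you can slice by speaker there as well (assuming the role_tag field name used in the jq examples later in this README, and the default log path):

```shell
# Pull only the researcher agent's lines from the JSONL backup.
LOG="${JSONL_LOG_PATH:-$HOME/workspace/conversation_log.jsonl}"
[ -f "$LOG" ] && jq -c 'select(.role_tag == "agent:researcher")' "$LOG" || true
```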
┌─────────────────────────────────────────┐
│ OpenClaw Ecosystem │
│ ┌─────────┐ ┌─────────┐ ┌─────────┐ │
│ │ User │ │Assistant│ │ Agents │ │
│ │ Messages│ │Responses│ │Research │ │
│ └────┬────┘ └────┬────┘ └────┬────┘ │
│ └─────────────┴─────────────┘ │
│ │ │
│ ┌──────▼──────┐ │
│ │ log.py │ │
│ │ (this skill)│ │
│ └──────┬──────┘ │
└─────────────────────┼───────────────────┘
│
┌─────────────────┼─────────────────┐
↓ ↓ ↓
┌───────┐ ┌─────────────┐ ┌──────────┐
│ JSONL │ │ Memvid │ │ Search │
│ File │ │ Files │ │ Query │
└───────┘ └─────────────┘ └──────────┘
│ │
↓ ↓
grep/jq memvid ask/find
# What did you say about...?
memvid ask memory_2026-02.mv2 "What was your recommendation about the Mercedes vs Tesla?"
# What did I ask for...?
memvid ask memory_2026-02.mv2 "What Python scripts did I request last week?"
# What did agents do...?
memvid ask memory_2026-02.mv2 "What did the researcher agent find about options trading?"
# System events...?
memvid ask memory_2026-02.mv2 "When did the PowerSchool grades cron job run?"
# Find specific terms
memvid find memory_2026-02.mv2 --query "Mercedes"
# With filters
memvid find memory_2026-02.mv2 --query "script" --tag agent:coder
memvid when memory_2026-02.mv2 "yesterday"
memvid when memory_2026-02.mv2 "last Tuesday"
memvid when memory_2026-02.mv2 "3 days ago"
Memvid has three search modes. This skill uses --mode neural by default for maximum accuracy:
# Always use neural for semantic understanding and context
memvid ask memory.mv2 "What supplements did Dr. Sinclair recommend?" --mode neural
memvid ask memory.mv2 "What did we discuss about BadjAI?" --mode neural
memvid ask memory.mv2 "Show me the Python scripts I requested" --mode neural
Speed: ~200ms | Best for: Semantic understanding, context, synonyms, conceptual relationships
Lexical Search (Fastest)
# Use only for exact keyword matching when speed is critical
memvid find memory.mv2 --mode lex --query "metformin"
Speed: ~8ms | Use when: Exact word matching needed, latency is critical
Hybrid Search (Balanced)
# Combines lexical + neural
memvid find memory.mv2 --mode hybrid --query "diabetes medications"
Speed: ~300-500ms | Use when: You want both exact matches and semantic similarity
| Mode | Speed | Accuracy | Use Case |
|---|---|---|---|
| neural | ~200ms | Highest | Default - semantic understanding |
| lex | ~8ms | Keyword only | Speed-critical exact matches |
| hybrid | ~300-500ms | High | Balanced approach |
The ~200ms trade-off is worth it: Neural mode understands context, handles paraphrases, and finds conceptually related information that lexical search misses entirely.
# Quick grep
grep "Mercedes" conversation_log.jsonl
# Complex queries with jq
jq 'select(.role_tag == "user" and (.content | contains("Python")))' conversation_log.jsonl
# Time range
jq 'select(.timestamp >= "2026-02-01" and .timestamp < "2026-03-01")' conversation_log.jsonl
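The filters above compose into a single pass. A sketch (note the parentheses around the contains test: in jq, | binds looser than and):

```shell
# Role + time window + keyword in one jq pass over the JSONL backup.
LOG=conversation_log.jsonl
[ -f "$LOG" ] && jq -c 'select(.role_tag == "user"
  and .timestamp >= "2026-02-01"
  and (.content | contains("Python")))' "$LOG" || true
```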
| Variable | Default | Mode | Description |
|---|---|---|---|
| MEMVID_API_KEY | (none) | API | Your memvid.com API key |
| MEMVID_MODE | monthly | All | single or monthly |
| JSONL_LOG_PATH | ~/workspace/conversation_log.jsonl | All | Backup log file |
| MEMVID_PATH | ~/workspace/memory.mv2 | All | Base path for memory files |
| MEMVID_BIN | ~/.npm-global/bin/memvid | All | Path to memvid CLI |
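As noted in the 1.2.5 fixes, .bashrc only reaches interactive shells, so these variables belong in /etc/environment for background services. A sketch (persist_env is a hypothetical helper; the example writes to a scratch file, point it at /etc/environment with root privileges for real use):

```shell
# Hypothetical helper: append settings in /etc/environment's KEY=VALUE format.
persist_env() {
  target="$1"; shift
  printf '%s\n' "$@" >> "$target"
}
# Example run against a scratch file:
persist_env /tmp/environment.sample \
  "MEMVID_MODE=monthly" \
  "MEMVID_BIN=$HOME/.npm-global/bin/memvid"
```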
Add to openclaw.json:
{
"hooks": {
"internal": {
"enabled": true,
"entries": {
"conversation-logger": {
"enabled": true,
"command": "python3 ~/.openclaw/workspace/skills/unified-logger/tools/log.py"
}
}
}
}
}
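After editing, a quick sanity check confirms the JSON parses and the hook entry is where OpenClaw expects it (the config path here is an assumption; adjust to wherever your openclaw.json lives):

```shell
# Print the hook entry; a jq parse error means the edit broke the JSON.
CFG="$HOME/.openclaw/openclaw.json"
[ -f "$CFG" ] && jq '.hooks.internal.entries["conversation-logger"]' "$CFG" \
  || echo "no config at $CFG"
```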
memory.mv2
├── [user] messages
├── [assistant] responses
├── [agent:researcher] findings
├── [agent:coder] code
├── [tool:exec] commands
└── [system] events
memory_2026-01.mv2 (January conversations)
memory_2026-02.mv2 (February conversations) ← Current
memory_2026-03.mv2 (March, auto-created on March 1)
# Option 1: Archive and start fresh
mv memory.mv2 memory_archive.mv2
memvid create memory.mv2
# Option 2: Switch to monthly sharding
export MEMVID_MODE="monthly"
# Option 3: Get API key
export MEMVID_API_KEY="your_key" # $59-299/month at memvid.com
Current month's file auto-creates. If missing:
memvid create memory_$(date +%Y-%m).mv2
Sub-agents log to their own sessions. Ensure the skill is installed in the main agent's workspace and that sub-agents inherit it.
Memvid uses semantic search, so be specific: ask "What did the researcher agent find about Tesla?" rather than just "Tesla".
| Feature | API Mode | Free Mode | Sharding Mode |
|---|---|---|---|
| Cost | $59-299/mo | FREE | FREE |
| Capacity | 1-25GB+ | 50MB | Unlimited (files) |
| Files | 1 | 1 | Multiple (monthly) |
| Unified Search | ✅ Yes | ✅ Yes | ❌ Per-file only |
| Cross-Context Search | ✅ Full history | ✅ Full history | ❌ Month isolated |
| Best For | Power users | Testing | Long-term free use |
| Complexity | Simple | Simple | Must track files |
The situation: Memvid's pricing goes from $0 (50MB) straight to $59/month (1GB).
The problem: That's like buying a Ferrari when you just need a Honda Civic for your commute.
What we're doing about it:
I reached out. While they consider it, Sharding Mode exists so you don't have to pay Ferrari prices for Honda Civic usage.
You can help:
If you also think $0 → $59 is a bit much, reach out to Memvid at memvid.com and tell them stackBlock sent you. The more voices, the faster we get that $10-20 middle tier for the rest of us.
Until then: Sharding Mode. Because startups shouldn't have to choose between ramen and memory. 🍜
MIT - See LICENSE