Install

```bash
openclaw skills install hurttlocker-cortex
```

Local-first agent memory with Ebbinghaus decay, hybrid search, and MCP tools. Import files, extract facts, search with combined BM25 and semantic retrieval, and track confidence over time. Zero dependencies, single Go binary, SQLite storage. Use it when you need persistent memory beyond OpenClaw's built-in MEMORY.md, especially for multi-agent setups, large knowledge bases, or when compaction keeps losing important context. Don't use it for conversation history (use memory_search), exact string matching (use ripgrep), or web lookups.
The memory layer OpenClaw should have built in.
Cortex is an open-source, import-first memory system for AI agents. Single Go binary, SQLite storage, zero cloud dependencies. It solves the #1 complaint about OpenClaw: agents forget everything after compaction.
GitHub: https://github.com/hurttlocker/cortex
Install: `brew install hurttlocker/cortex/cortex` or download from Releases
OpenClaw's default memory is Markdown files. When context fills up, compaction summarizes and destroys specifics. Cortex fixes this:
| Problem | Cortex Solution |
|---|---|
| Compaction loses details | Persistent SQLite DB survives any session |
| No search — just dump files into context | Hybrid BM25 + semantic search (~16ms keyword, ~52ms semantic) |
| Everything has equal weight | Ebbinghaus decay — important facts stay, noise fades naturally |
| Can't import existing files | Import-first: Markdown, text, any file. 8 connectors (GitHub, Gmail, Calendar, Drive, Slack, Notion, Discord, Telegram) |
| Multi-agent memory leaks | Per-agent scoping built in |
| Expensive cloud memory services | $0/month. Forever. Local SQLite. |
```bash
# macOS/Linux (Homebrew)
brew install hurttlocker/cortex/cortex

# Or download a binary directly:
# https://github.com/hurttlocker/cortex/releases/latest
```
```bash
# Import OpenClaw's memory files
cortex import ~/clawd/memory/ --extract

# Import specific files
cortex import ~/clawd/MEMORY.md --extract
cortex import ~/clawd/USER.md --extract
```
```bash
# Fast keyword search
cortex search "wedding venue" --limit 5

# Semantic search (requires Ollama with nomic-embed-text)
cortex search "what decisions did I make about the project" --mode semantic

# Hybrid (recommended)
cortex search "trading strategy" --mode hybrid
```
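Hybrid mode merges the keyword (BM25) and semantic rankings into one result list. A common way to do this is reciprocal rank fusion (RRF); whether Cortex fuses rankings this way is an assumption, but the sketch below shows the general shape of hybrid retrieval:

```go
package main

import (
	"fmt"
	"sort"
)

// rrfFuse merges two rankings with reciprocal rank fusion:
// score(doc) = sum over lists of 1/(k + rank). Documents that rank
// well in either list surface near the top of the fused result.
// RRF is a common hybrid-search technique; Cortex's actual fusion
// method may differ.
func rrfFuse(bm25, semantic []string, k float64) []string {
	scores := map[string]float64{}
	for rank, id := range bm25 {
		scores[id] += 1.0 / (k + float64(rank+1))
	}
	for rank, id := range semantic {
		scores[id] += 1.0 / (k + float64(rank+1))
	}
	fused := make([]string, 0, len(scores))
	for id := range scores {
		fused = append(fused, id)
	}
	sort.Slice(fused, func(i, j int) bool { return scores[fused[i]] > scores[fused[j]] })
	return fused
}

func main() {
	bm25 := []string{"fact-12", "fact-7", "fact-3"}
	semantic := []string{"fact-7", "fact-9", "fact-12"}
	// fact-7 ranks high in both lists, so it tops the fused ranking.
	fmt.Println(rrfFuse(bm25, semantic, 60))
}
```

The constant `k` (60 is the value from the original RRF paper) damps the influence of top ranks so one list can't dominate.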
```bash
# Add to your MCP config; Cortex exposes 17 tools + 4 resources
cortex mcp             # stdio mode
cortex mcp --port 8080 # HTTP + SSE mode
```
Facts decay at different rates based on type. Identity facts (names, roles) last ~2 years. Temporal facts (events, dates) fade in ~1 week. State facts (status, mood) fade in ~2 weeks. This means search results naturally prioritize what matters — without manual curation.
Every imported file gets its facts extracted automatically.
Pull memory from external sources:
```bash
cortex connect sync --provider github --extract
cortex connect sync --provider gmail --extract
cortex connect sync --all --extract
```
Explore your memory visually:
```bash
cortex graph --serve --port 8090
# Opens an interactive 2D graph explorer in the browser
```
```bash
cortex cleanup --purge-noise   # Remove garbage + duplicates
cortex stale 30                # Find facts not accessed in 30 days
cortex conflicts               # Detect contradictions
cortex conflicts --resolve llm # Auto-resolve with an LLM
```
memory_search → Cortex → QMD → ripgrep → web search
Use OpenClaw's built-in memory_search for conversation history, then Cortex for deep knowledge retrieval.
The included `scripts/cortex.sh` provides shortcuts:
```bash
scripts/cortex.sh search "query" 5 # Hybrid search
scripts/cortex.sh stats            # Memory health
scripts/cortex.sh stale 30         # Stale-fact detection
scripts/cortex.sh conflicts        # Contradiction detection
scripts/cortex.sh sync             # Incremental import
scripts/cortex.sh reimport         # Full wipe + re-import
scripts/cortex.sh compaction       # Pre-compaction state brief
```
```bash
# Auto-import sessions + sync connectors every 30 min
cortex connect schedule --every 30m --install
```
| | Cortex | Mem0 | Zep | LangMem |
|---|---|---|---|---|
| Deploy | Single binary | Cloud or K8s | Cloud | Python lib |
| Cost | $0 | $19-249/mo | $25/mo+ | Infra costs |
| Privacy | 100% local | Cloud by default | Cloud | Depends |
| Decay | Ebbinghaus (7 rates) | TTL only | Temporal | None |
| Import | Files + 8 connectors | Chat extraction | Chat/docs | Chat extraction |
| Search | BM25 + semantic | Vector + graph | Temporal KG | JSON docs |
| MCP | 17 tools native | No | No | No |
| Dependencies | Zero | Python + cloud | Cloud + credits | Python + LangGraph |
Semantic search requires Ollama with the `nomic-embed-text` model. For synthesized answers rather than raw hits, use `cortex answer` instead of `cortex search`.

Add to `~/.cortex/config.yaml`:
```yaml
search:
  source_boost:
    - prefix: "memory/"
      weight: 1.5
    - prefix: "file:MEMORY"
      weight: 1.6
    - prefix: "github"
      weight: 1.3
    - prefix: "session:"
      weight: 0.9
```
Higher weight = more trusted. Daily notes and core files rank above auto-imported sessions.
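A minimal sketch of how prefix-based boosts might be applied at scoring time. First-match-wins rule ordering and the function shape are assumptions for illustration, not documented Cortex behavior:

```go
package main

import (
	"fmt"
	"strings"
)

// boost is one source_boost rule from the config sketch above.
type boost struct {
	prefix string
	weight float64
}

// applySourceBoost multiplies a raw relevance score by the weight of
// the first matching source prefix, or leaves it unchanged if no rule
// matches. First-match-wins ordering is an assumption.
func applySourceBoost(score float64, source string, rules []boost) float64 {
	for _, r := range rules {
		if strings.HasPrefix(source, r.prefix) {
			return score * r.weight
		}
	}
	return score
}

func main() {
	rules := []boost{
		{"memory/", 1.5},
		{"file:MEMORY", 1.6},
		{"github", 1.3},
		{"session:", 0.9},
	}
	// Daily notes get boosted; auto-imported sessions get demoted.
	fmt.Println(applySourceBoost(0.8, "memory/2024-notes.md", rules))
	fmt.Println(applySourceBoost(0.8, "session:2024-01-02", rules))
}
```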
Use `--intent` when you know where the answer lives:

- `--intent memory`: personal decisions, preferences, people
- `--intent connector`: code, PRs, emails, external data
- `--intent import`: imported files and documents

```bash
# Nightly dry-run + apply (launchd or cron)
cortex lifecycle run --dry-run > /tmp/lifecycle-plan.log 2>&1

# If anything found, apply:
cortex lifecycle run
```
Recommended schedule: 3:30 AM daily. For the first week, run dry-runs only and review the logs.
Fresh agent (< 500 facts):
```yaml
policies:
  reinforce_promote:
    min_reinforcements: 3
    min_sources: 2
  decay_retire:
    inactive_days: 90
    confidence_below: 0.25
  conflict_supersede:
    min_confidence_delta: 0.20
```
Mature agent (2000+ facts):
```yaml
policies:
  reinforce_promote:
    min_reinforcements: 5
    min_sources: 3
  decay_retire:
    inactive_days: 45
    confidence_below: 0.35
  conflict_supersede:
    min_confidence_delta: 0.10
```
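The thresholds above read as simple AND conditions. A minimal Go sketch, assuming both conditions in each policy must hold; field and function names are illustrative, not Cortex internals:

```go
package main

import "fmt"

// fact carries the fields the lifecycle policies above key on.
type fact struct {
	reinforcements int
	sources        int
	inactiveDays   int
	confidence     float64
}

// shouldRetire mirrors decay_retire: retire when a fact is both stale
// AND low-confidence. Requiring both (rather than either) is an
// assumption about how the thresholds combine.
func shouldRetire(f fact, inactiveDays int, confidenceBelow float64) bool {
	return f.inactiveDays >= inactiveDays && f.confidence < confidenceBelow
}

// shouldPromote mirrors reinforce_promote: promote when a fact has been
// reinforced enough times from enough independent sources.
func shouldPromote(f fact, minReinforcements, minSources int) bool {
	return f.reinforcements >= minReinforcements && f.sources >= minSources
}

func main() {
	stale := fact{reinforcements: 1, sources: 1, inactiveDays: 120, confidence: 0.2}
	// Checked against the fresh-agent thresholds (90 days, 0.25).
	fmt.Println(shouldRetire(stale, 90, 0.25)) // true
	fmt.Println(shouldPromote(stale, 3, 2))    // false
}
```

This is why the mature-agent profile tightens `inactive_days` but raises `confidence_below`: with more facts competing, staleness alone is a weaker retirement signal.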
After any bulk import, run:
```bash
cortex cleanup --dedup-facts    # Remove near-duplicates
cortex conflicts --auto-resolve # Resolve contradictions
```
memory_search → cortex answer (synthesis) → cortex search (pointers) → QMD → ripgrep → web