Install
```shell
openclaw skills install duru-memory
```

Markdown-based memory continuity system for agents that uses local Markdown files as the primary memory source. Use it when building or operating local Markdown memory files (daily logs, project memory, handoff notes), maintaining current-state records, running session start/close memory protocols, and pairing a structured Markdown workflow with OpenClaw's built-in memory tools.

Use this skill to run a high-traceability memory system based on plain Markdown files.
This skill is complementary to OpenClaw's built-in memory stack:
- duru-memory manages the Markdown files, conventions, and maintenance workflow
- `memory_search` and `memory_get` are the primary recall/read tools during normal assistant operation
- `memory-core` indexes and retrieves from the same Markdown tree
- `active-memory` is an optional pre-reply recall layer for eligible interactive sessions, not a guaranteed default for every deployment

Treat the Markdown files as the source of truth, and OpenClaw's built-in memory stack as the main retrieval and recall layer.
Session protocol:

- At session start, read `memory/CORE/hard-rules.md` and `memory/CORE/current-state.md`.
- Load recent daily logs from `memory/daily/` (default: last 2 days).
- For recall, use `memory_search` first.
- Run `scripts/memory-search.sh "<query>"` when you want explicit file-level control, debugging, maintenance, or a second opinion against built-in recall.
- Update `current-state.md` when project/task state changes.
- Append to `state-changelog.md` for every meaningful state update.
- At session close, run `scripts/session-close.sh`.

Directory layout:

```
memory/
  CORE/
    hard-rules.md
    current-state.md
    state-changelog.md
  daily/
  projects/
  people/
  concepts/
  handoff/
  archive/raw/
  INDEX.md
```
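The directory tree can be bootstrapped in a few lines of Python. This is a convenience sketch using only paths from the tree itself; it is not one of the skill's own scripts:

```python
from pathlib import Path

# Directories and CORE files from the duru-memory tree; files start empty.
DIRS = ["CORE", "daily", "projects", "people", "concepts", "handoff", "archive/raw"]
CORE_FILES = ["hard-rules.md", "current-state.md", "state-changelog.md"]

def bootstrap(root: str = "memory") -> Path:
    base = Path(root)
    for d in DIRS:
        (base / d).mkdir(parents=True, exist_ok=True)
    for f in CORE_FILES:
        (base / "CORE" / f).touch(exist_ok=True)
    (base / "INDEX.md").touch(exist_ok=True)
    return base
```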
Record only high-value memory:
Add attributes when possible:
- `status: active | superseded | invalid`
- `polarity: positive | negative`
- `confidence: high | medium | low`
- `avoid_reason: ...` (required for negative/pitfall entries)

Avoid logging casual chat unless it impacts future execution.
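For illustration only, since this section does not fix an exact entry layout, entries carrying these attributes might look like the following. The date, topics, and `avoid_reason` text are invented:

```markdown
## 2025-06-03 — build pipeline
- [decision] Pin the embedding model in config.yaml instead of tracking latest.
  status: active | polarity: positive | confidence: high
- [pitfall] Do not run compaction while the semantic index is rebuilding.
  status: active | polarity: negative | confidence: medium
  avoid_reason: concurrent writes left memory/.semantic-index.db inconsistent
```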
For normal assistant recall, built-in `memory_search` is the default path. It integrates with `memory-core`, can include indexed session transcripts, and follows the configured OpenClaw memory backend.
Use this skill's local retrieval scripts when you want explicit file-level control, debugging, maintenance, reproducible inspections, or a second opinion against built-in recall.
Local deterministic retrieval uses weighted matching:

- extra weight for tag keywords (`decision`, `todo`, `blocker`, `preference`)
- extra weight for high-priority directories (`projects`, `people`, `CORE`)

Optional local semantic retrieval is an experimental supplement, not the primary OpenClaw memory path. When local Ollama embedding is available, it can be added as a second pass after deterministic retrieval (default embedding model in `config.yaml`: `qwen3-embedding:0.6b`). This local semantic layer can coexist with built-in retrieval, but it does not replace OpenClaw's built-in `memory_search` contract.
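The deterministic weighted matching described above can be sketched as follows. The weight values and the scoring formula are illustrative assumptions, not the actual logic of `scripts/memory-search.sh`:

```python
# Hypothetical weights: tag hits and priority directories boost raw keyword hits.
TAG_WEIGHTS = {"decision": 3.0, "todo": 2.0, "blocker": 3.0, "preference": 2.0}
DIR_WEIGHTS = {"projects": 1.5, "people": 1.3, "CORE": 2.0}

def score(path: str, text: str, query: str) -> float:
    words = query.lower().split()
    text_lower = text.lower()
    base = sum(text_lower.count(w) for w in words)        # raw keyword hits
    tag_boost = sum(w for tag, w in TAG_WEIGHTS.items()   # bonus for tagged entries
                    if f"[{tag}]" in text_lower)
    top_dir = path.split("/")[1] if "/" in path else ""   # memory/<dir>/...
    return (base + tag_boost) * DIR_WEIGHTS.get(top_dir, 1.0)
```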
This skill's semantic mode uses a local SQLite + sqlite-vec path with incremental indexing:
- index database: `memory/.semantic-index.db`
- vector extension: `sqlite-vec` (loaded via APSW)
- index metadata tracks the model, embedding dimension, and `pipeline_version`
- `SEMANTIC_MIN_SCORE` (default 0.48) filters low-similarity matches
- `FUSION_MODE=rrf|linear` (default `rrf`) selects how keyword and semantic rankings are fused
- `RRF_K` (default 60)
- `KEYWORD_BOOST` (default 0.006)
- `FUSION_SEM_WEIGHT` + `FUSION_KEY_WEIGHT` (defaults 0.65/0.35) weight the linear fusion
- `session-start.sh` runs `memory-semantic-search.py --build-only` once per day
- entries with `polarity=negative` or `status` in {`invalid`, `superseded`} are excluded from positive ranking and surfaced in a dedicated "⚠ Avoided Pitfalls" warning block

Do not assume semantic retrieval is available.
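A minimal sketch of the two fusion modes, using the defaults quoted above (`RRF_K=60`, weights 0.65/0.35). The exact normalization and tie-breaking inside `memory-semantic-search.py` may differ:

```python
def fuse_rrf(ranked_sem, ranked_key, k=60):
    """Reciprocal rank fusion over two ranked lists of doc ids (best first)."""
    scores = {}
    for ranked in (ranked_sem, ranked_key):
        for rank, doc in enumerate(ranked):  # rank is 0-based here
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

def fuse_linear(sem, key, w_sem=0.65, w_key=0.35):
    """Weighted sum over doc -> score maps, assumed pre-normalized to [0, 1]."""
    docs = set(sem) | set(key)
    scores = {d: w_sem * sem.get(d, 0.0) + w_key * key.get(d, 0.0) for d in docs}
    return sorted(scores, key=scores.get, reverse=True)
```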
If built-in vector search, sqlite-vec, Ollama embeddings, or the skill's local semantic service is unavailable, fall back to built-in `memory_search` in its degraded lexical mode or to this skill's deterministic local retrieval. In degraded mode, read `MEMORY.md`, `memory/CORE/current-state.md`, and recent daily logs directly.

Recommended division of labor:
- let this skill own `MEMORY.md` and `memory/*.md`
- keep `memory/CORE/current-state.md` as the execution truth for active work
- let `memory-core` index the same tree
- use `memory_search` and `memory_get` as the default runtime recall path
- keep `active-memory` optional and deployment-dependent, especially if pre-reply recall causes latency or timeout issues

Avoid creating separate parallel memory trees for built-in memory and Markdown memory. One shared Markdown tree is the cleanest setup.
- `scripts/session-start.sh`: startup checklist + quick context load hints
- `scripts/memory-search.sh`: hybrid retrieval entry (keyword first, semantic optional)
- `config.yaml`: centralized model/runtime tuning (`ollama.base_url`, `models.*`, `semantic.*`, `fusion.*`)
- `scripts/memory-semantic-search.py`: semantic recall via Ollama `/api/embeddings`
- `scripts/memory-auto-tag.py`: local-model auto-tagger (model from `config.yaml`, default `gemma4:e4b`) for incremental memory changes (`--mode tag|review`, `--files`, `--force`)
- `scripts/memory-write-tag.sh`: write/append helper that immediately tags the target file
- `scripts/memory-compact.py`: weekly compaction (daily -> summaries, mark stale, re-sync vectors)
- `scripts/memory-forget.py`: monthly forgetting (archive old stale daily logs, keep negative pitfalls)
- `scripts/session-close.sh`: runs the auto-tagger in `--mode review` first, then appends the daily log and checks state freshness
- `scripts/auto-commit.sh`: optional git safety-net commit
- `references/templates.md`: canonical templates for state/daily/project/handoff files
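Given the key groups named above (`ollama.base_url`, `models.*`, `semantic.*`, `fusion.*`) and the defaults quoted in this document, a `config.yaml` might look like the sketch below. Key names beyond those quoted are guesses, not the skill's actual schema:

```yaml
ollama:
  base_url: http://localhost:11434  # assumed default Ollama endpoint
models:
  embedding: qwen3-embedding:0.6b   # default stated above
  tagger: gemma4:e4b                # default stated above
semantic:
  min_score: 0.48
fusion:
  mode: rrf            # rrf | linear
  rrf_k: 60
  keyword_boost: 0.006
  sem_weight: 0.65
  key_weight: 0.35
```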