A Markdown + JSON memory framework with conversation archiving for AI agents. Provides persistent long-term memory with biologically-inspired decay, recall boosting, temporal fact chains, dream-cycle consolidation, and channel-agnostic conversation archiving with AI-generated summaries. No RAG, no vector database, no graph store, no external service: just plain Markdown and JSON files your human can read and edit.

Use when you need: agent memory that persists across sessions, conversation context across channels/groups/topics, fact lifecycle tracking (supersession), or automated memory maintenance via dream cycles.

## Install

```
openclaw skills install openclaw-memory-dreaming
```
References (load when needed):

- `references/architecture.md` — data model, biological inspiration, design decisions
- `references/dream-cycle.md` — full dream cycle procedure with examples
- `references/cold-start.md` — starting from zero memories
- `references/cron-templates.md` — ready-to-use OpenClaw cron definitions
Copy the scripts into your workspace:

```
cp skills/memory-dreaming/scripts/*.js scripts/
```

Scripts use `path.resolve(__dirname, '..')` for the workspace root — they must live in `<workspace>/scripts/`.
```
node scripts/memory-bootstrap.js                  # seed memory-meta.json
node scripts/conversation-archive.js --discover   # see available channels
```
Starting from scratch? See references/cold-start.md.
Create `archives/archive-config.json` to label your groups:

```json
{
  "agentName": "YourName",
  "groups": { "<group-id>": { "name": "my-group", "label": "My Group" } },
  "topicNames": { "<group-id>": { "1": "General" } }
}
```
Then archive and summarise:

```
node scripts/conversation-archive.js --all
node scripts/conversation-summarise.js --all
```
Your curated knowledge — facts, preferences, decisions, people.
| File | Purpose |
|---|---|
| `MEMORY.md` | Long-term memory (human-readable, agent-edited) |
| `memory/YYYY-MM-DD.md` | Daily notes (raw session logs) |
| `memory/memory-meta.json` | Decay metadata per entry |
| `memory/dream-log.md` | Dream cycle audit trail |
| `memory/archive/YYYY-MM.md` | Archived (forgotten) entries |
Context from group chats, channels, topics — archived and summarised.
| File | Purpose |
|---|---|
| `archives/<channel>/<group>/raw/` | Full conversation transcripts |
| `archives/<channel>/<group>/summaries/` | AI-generated topic summaries |
| `archives/<channel>/<group>/DIGEST.md` | Cross-topic master digest |
| `archives/<channel>/<group>/INDEX.md` | Topic index with message counts |
| Tier | Age | Decays? | Notes |
|---|---|---|---|
| hot | <48h | Yes | New entries |
| warm | <30d | Yes | Recent |
| cold | <365d | Yes | Long-term |
| archived | — | Removed | score < 0.1 and recalls < 2 |
| crystallised | ∞ | Never | 20+ recalls |
Structural entries (IPs, people, URLs, passwords) never decay below 0.3.
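The tier rules above can be sketched as a small helper. This is illustrative only: the field names (`ageDays`, `score`, `recallCount`) are assumptions, not necessarily what `memory-meta.json` uses.

```javascript
// Sketch of tier assignment from the table above (field names assumed).
function assignTier(entry) {
  if (entry.recallCount >= 20) return 'crystallised';             // never decays
  if (entry.score < 0.1 && entry.recallCount < 2) return 'archived';
  if (entry.ageDays < 2) return 'hot';                            // <48h
  if (entry.ageDays < 30) return 'warm';
  return 'cold';                                                  // long-term
}
```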
```
baseDecay    = 1.0 - (daysSinceCreated / maxAgeDays)
recallBoost  = min(recallCount × 0.05, 0.5)
recencyBoost = lastRecalled ≤ 7d → 0.2 | ≤ 30d → 0.1 | else → 0
decayScore   = clamp(baseDecay + recallBoost + recencyBoost, 0.0, 1.0)
```
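The formula can be sketched as runnable JavaScript, including the structural floor. This is not `memory-decay.js` itself: the field names and the 365-day `maxAgeDays` horizon (taken from the cold tier) are assumptions.

```javascript
// Sketch of the decay formula above (names and maxAgeDays assumed).
function decayScore(entry, now = Date.now()) {
  const DAY = 86_400_000;                       // ms per day
  const maxAgeDays = 365;                       // assumed: cold tier horizon
  const ageDays = (now - entry.createdAt) / DAY;

  const baseDecay = 1.0 - ageDays / maxAgeDays;
  const recallBoost = Math.min(entry.recallCount * 0.05, 0.5);

  const sinceRecall = entry.lastRecalled ? (now - entry.lastRecalled) / DAY : Infinity;
  const recencyBoost = sinceRecall <= 7 ? 0.2 : sinceRecall <= 30 ? 0.1 : 0;

  let score = Math.min(1.0, Math.max(0.0, baseDecay + recallBoost + recencyBoost));
  if (entry.structural) score = Math.max(score, 0.3);   // IPs, people, URLs, passwords
  return score;
}
```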
| Script | Purpose | When to run |
|---|---|---|
| `memory-bootstrap.js` | Seed meta from MEMORY.md | Setup + after adding entries |
| `memory-decay.js` | Recalculate scores, tier transitions | Start of dream cycle |
| `memory-recall-logger.js` | Log recall events, boost scores | After every memory search |
| `memory-supersede.js` | Create temporal fact chains | When facts change |
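A temporal fact chain is conceptually simple: the old fact stays but points forward to its replacement. The sketch below shows the idea only; the actual metadata shape `memory-supersede.js` writes is unknown, so every field name here is an assumption.

```javascript
// Conceptual sketch of a supersession chain (metadata shape assumed).
function supersede(meta, oldText, newText, now = new Date().toISOString()) {
  const oldEntry = meta.entries.find(e => e.text === oldText);
  const newEntry = { text: newText, createdAt: now, recallCount: 0, supersedes: oldText };
  if (oldEntry) {
    oldEntry.supersededBy = newText;   // old fact is kept, but linked forward
    oldEntry.supersededAt = now;
  }
  meta.entries.push(newEntry);
  return meta;
}
```

A later query can then walk the chain in either direction: from the current fact back through `supersedes`, or from a stale fact forward through `supersededBy`.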
| Script | Purpose | When to run |
|---|---|---|
| `conversation-archive.js` | Archive channel transcripts | Nightly (before summarise) |
| `conversation-summarise.js` | AI-summarise topics + digest | Nightly (after archive) |
```
# Core memory
node scripts/memory-decay.js --verbose
node scripts/memory-recall-logger.js --query "dev server IP" --matches "Dev server: user@203.0.113.10"
node scripts/memory-supersede.js --old "Status: pending" --new "Status: accepted"

# Conversation memory
node scripts/conversation-archive.js --discover    # list available sessions
node scripts/conversation-archive.js --all         # archive everything
node scripts/conversation-archive.js --channel telegram --group my-group
node scripts/conversation-summarise.js --all       # summarise all archived
node scripts/conversation-summarise.js --force     # re-summarise everything
```
On recall: search `MEMORY.md` for context, then log the hit:

```
node scripts/memory-recall-logger.js --query "<search>" --matches "<matched line>"
```

On learning something new: add it to `MEMORY.md` under the right section; if it replaces an existing fact, chain it with `memory-supersede.js`; update `memory/YYYY-MM-DD.md` with what happened.

Dream cycle: agent-orchestrated consolidation. Run nightly or during quiet heartbeats:

1. `node scripts/memory-decay.js`
2. Review recent `memory/YYYY-MM-DD.md` files
3. Consolidate worthwhile entries into `MEMORY.md`
4. Record the run in `memory/dream-log.md`
5. `node scripts/memory-bootstrap.js` (picks up new entries)

Full procedure with worked examples: `references/dream-cycle.md`.
Cron job definitions: references/cron-templates.md.
When receiving a message from a group/topic, load context before responding:

1. Read `archives/<channel>/<group>/summaries/topic-<id>.md` for topic context
2. For cross-topic context, check `DIGEST.md`

This prevents the "I don't know what you're talking about" problem in long-running group conversations.
Set the summariser model via `summariseModel` in `archive-config.json`.

This is a work in progress. Start simple, observe what works, add complexity when the simple thing breaks.
Core memory scripts (4): No credentials needed. These are pure local file operations — read/write Markdown and JSON in your workspace.
Conversation summariser: Requires an LLM API key. Set one of:

- `OPENROUTER_API_KEY` in `.env.openrouter` or the environment
- `OPENAI_API_KEY` in `.env.openai` or the environment

The summariser sends conversation transcripts to the configured LLM provider for summarisation. This means your chat content is sent to a third-party API. Use a self-hosted model or review transcripts before summarising if this is a concern.
Conversation archiver: No credentials. Reads local OpenClaw session transcripts and writes local Markdown files.
Reads:

- `MEMORY.md` and `memory/` files (your workspace)
- OpenClaw session transcripts (`sessions.json` + `.jsonl` files)
- `.env.openrouter` or `.env.openai` (API keys, summariser only)
- `archives/archive-config.json` (optional config you create)

Writes:

- `memory/memory-meta.json`, `memory/dream-log.md`, `memory/archive/`
- `archives/<channel>/<group>/` (raw transcripts, summaries, digests)

Only `conversation-summarise.js` sends data externally (to your configured LLM API for summarisation); all other scripts are fully local.