Morrow Agent Memory

v1.0.0

Design, implement, and debug memory systems for persistent autonomous AI agents. Use when building agents that need to survive context window rotation, prese...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for timesandplaces/morrow-agent-memory.

Prompt preview (Install & Setup):
Install the skill "Morrow Agent Memory" (timesandplaces/morrow-agent-memory) from ClawHub.
Skill page: https://clawhub.ai/timesandplaces/morrow-agent-memory
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install morrow-agent-memory

ClawHub CLI

Package manager switcher

npx clawhub@latest install morrow-agent-memory

Security Scan

VirusTotal: Benign
OpenClaw: Benign (high confidence)

Purpose & Capability

The skill's name and description match the content of SKILL.md and the reference docs: it explains CMA/RAG/KG patterns, file layouts, boot routines, and temporal discipline. It asks for no environment variables or binaries and does not require unrelated cloud credentials or system access for its documented guidance.

Instruction Scope

The instructions tell an agent to read and manage local memory files (HEARTBEAT.md, CORE_MEMORY.md, OPEN_LOOPS.md, RUNTIME_REALITY.md) and to prefer tools like memory_search, memory_get, lcm_grep, and lcm_expand_query if available. This is consistent with a memory-design skill, but the docs also reference an OpenClaw local /v1 endpoint and use of an OPENCLAW_GATEWAY_TOKEN (in references); those are contextual integration details rather than declared requirements. Agents should only access those local endpoints/tokens if the platform explicitly exposes them; the SKILL.md does not itself request or store secrets.

Install Mechanism

There is no install spec and no code files, which is the lowest-risk form. The docs include optional guidance (pip install graphiti-core, docker run neo4j) for advanced temporal-KG setups; these steps are advisory and not automatically executed by the skill. Users should be aware that following those instructions will install packages and run Docker containers on their host.

Credentials

The skill declares no required environment variables or credentials (proportionate). However, the reference docs show how to authenticate to a local OpenClaw API using OPENCLAW_GATEWAY_TOKEN and mention embedding model names; these are integration details and not demanded by the skill itself. Verify that any local gateway tokens or memory-search tools the agent may be instructed to use are present and appropriately scoped.

Persistence & Privilege

The skill is not marked always:true and is user-invocable; it does not request persistent presence or modify other skills' configs. There is no evidence it seeks elevated agent privileges.

Assessment

This skill is documentation and runtime instructions for building agent memory systems, not executable code. It appears coherent and appropriate for that purpose. Before installing or acting on its advice:

  1. It suggests the agent read local workspace memory files (HEARTBEAT.md, CORE_MEMORY.md, etc.); do not store secrets or sensitive credentials in those files.
  2. The docs mention using a local OpenClaw API authenticated by OPENCLAW_GATEWAY_TOKEN. Only use that if you understand and trust the local service and token scope; the skill itself does not request the token, but an agent might try to use it if available.
  3. Optional advanced steps (pip install graphiti-core with --break-system-packages, and running Neo4j in Docker) will change your system and require network access; run them in a sandbox if unsure.
  4. Verify whether memory_search / lcm_* helper tools exist in your environment before relying on them.

Overall this skill is coherent and documentation-focused, but treat memory files and local gateway tokens as sensitive, and review them before giving an agent permission to access them.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🧠 Clawdis
Latest: vk974nvstjv0b7wx8tas7x54mxs83q8nz
140 downloads · 0 stars · 1 version
Updated 1mo ago
v1.0.0 · MIT-0

Agent Memory

Design and implement memory systems that let agents survive context window rotation and maintain continuity across sessions.

Core Problem

LLM agents have finite context windows. Memory is lost when:

  • Session ends or rotates
  • Context is pruned or compacted under pressure
  • Summaries replace detailed history (lossy compression)

Durable memory is not a nice-to-have — it is the agent's continuity substrate.

Architecture Patterns

Three dominant architectures for persistent agent memory:

1. CMA — Continuous Memory Architecture

Agent maintains flat/hierarchical markdown files, reads selectively at boot, writes on state change. Best for: operational state, ongoing projects, agent identity.

  • ✅ Simple, no infrastructure, version-controlled
  • ✅ Human-readable and auditable
  • ✅ Works in any OpenClaw deployment
  • ❌ No semantic search without an embedder
  • ❌ No temporal reasoning (fact validity over time)

This is the default pattern for OpenClaw agents.

2. Semantic RAG Memory

Agent embeds facts into a vector store; retrieval uses embedding similarity. OpenClaw's built-in memory uses node-llama-cpp with 768-dim embeddings (all-MiniLM-L6-v2 compatible).

  • ✅ "What do I know about X?" queries across large fact sets
  • ✅ Better recall than text search for paraphrased queries
  • ❌ No temporal validity — stale facts pollute results
  • ❌ Requires embedder infrastructure
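
The retrieval mechanics can be illustrated without embedder infrastructure. The sketch below substitutes a toy bag-of-words vector for the real neural embeddings, so it demonstrates only the similarity-ranking step; this memory_search is a stand-in, not OpenClaw's built-in tool:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words vector; a real system would call a neural embedder here.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def memory_search(query: str, facts: list[str], k: int = 3) -> list[str]:
    """Rank stored facts by similarity to the query and return the top k."""
    q = embed(query)
    return sorted(facts, key=lambda f: cosine(q, embed(f)), reverse=True)[:k]
```

Swapping embed for a real model is what buys recall on paraphrased queries; the ranking logic stays the same.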

3. Temporal KG Memory (Graphiti/Zep pattern)

Agent builds a knowledge graph with valid_at/invalid_at on every fact edge. Graphiti (open source, wraps Neo4j) is the leading implementation.

  • ✅ Handles "what was true at time T?" queries correctly
  • ✅ Supersedes stale facts without deleting them
  • ✅ Entity deduplication across episodes
  • ❌ Requires Neo4j + LLM for ingestion (high latency, not real-time)
  • ❌ Best used as async batch-ingest, not inline tool

Recommendation: Use CMA + semantic RAG for all agents. Add temporal KG only for high-value long-horizon use cases (months of state).

See references/memory-architecture.md for detailed comparison and deployment notes.

Memory File Structure (CMA Pattern)

workspace/
├── HEARTBEAT.md          # Current pulse state (keep SHORT — < 40 lines)
├── memory/
│   ├── CORE_MEMORY.md    # Identity and continuity anchors
│   ├── GOALS.md          # Long-horizon aims
│   ├── OPEN_LOOPS.md     # Unresolved tasks and promises
│   ├── WORLD_MODEL.md    # Verified facts about environment
│   ├── CAPABILITIES.md   # Verified tools, channels, limits
│   ├── RUNTIME_REALITY.md # Live channel/mutation/config state
│   └── research/         # Durable research artifacts
└── operator-outbox.jsonl # Async operator messages

What Goes Where

| Fact type                  | File                 |
|----------------------------|----------------------|
| Who I am, values, drives   | CORE_MEMORY.md       |
| Current open work          | OPEN_LOOPS.md        |
| Infrastructure/env facts   | WORLD_MODEL.md       |
| What tools/channels work   | CAPABILITIES.md      |
| Live config/channel state  | RUNTIME_REALITY.md   |
| Research findings          | memory/research/*.md |
| Current pulse state        | HEARTBEAT.md         |

Temporal Annotation Convention

Add [YYYY-MM-DD] timestamps to facts in memory files. Mark superseded facts explicitly:

- [2026-03-27] Telegram: enabled, account "Morrow Operator Bot"
  ~~[2026-03-20] Telegram: disabled~~ SUPERSEDED 2026-03-27

This is lightweight temporal KG discipline without a full graph backend. See references/temporal-discipline.md.
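
Applying the convention mechanically might look like the following sketch. supersede is a hypothetical helper; real memory files would need more careful matching than a single regex:

```python
import re
from datetime import date

def supersede(lines: list[str], pattern: str, new_fact: str, today: date) -> list[str]:
    """Strike out matching fact lines and prepend the replacement fact."""
    stamp = today.isoformat()
    out = [f"- [{stamp}] {new_fact}"]  # new fact goes first, per the convention above
    for ln in lines:
        if re.search(pattern, ln) and "~~" not in ln:
            body = ln.strip().removeprefix("- ")
            out.append(f"  ~~{body}~~ SUPERSEDED {stamp}")
        else:
            out.append(ln)
    return out
```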

Boot Routine

At every session start, an agent should:

  1. Read HEARTBEAT.md (injected or explicit)
  2. Check operator inbox for new instructions
  3. For infrastructure/channel questions: read RUNTIME_REALITY.md (not older prose)
  4. For open work: read OPEN_LOOPS.md
  5. For nontrivial tasks: read CORE_MEMORY.md, GOALS.md

Never trust session transcript alone for state that should be in memory. Transcripts get compacted.
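
The reading steps above can be sketched as a boot helper. File names come from the CMA layout in this skill; the operator-inbox check (step 2) is platform-specific and omitted, and boot itself is illustrative, not part of OpenClaw:

```python
from pathlib import Path

# Boot order from the routine above; earlier files answer the most urgent questions.
BOOT_ORDER = [
    "HEARTBEAT.md",
    "memory/RUNTIME_REALITY.md",
    "memory/OPEN_LOOPS.md",
    "memory/CORE_MEMORY.md",
    "memory/GOALS.md",
]

def boot(workspace: Path) -> dict[str, str]:
    """Read each memory file that exists, in boot order; skip the ones that don't."""
    state: dict[str, str] = {}
    for rel in BOOT_ORDER:
        path = workspace / rel
        if path.exists():
            state[rel] = path.read_text(encoding="utf-8")
    return state
```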

Compression Defense

OpenClaw's lossless-claw plugin (or similar LCM) compacts older session history. Defend against lossy compression:

  1. Write before you forget. Externalize important facts immediately, not at the end of a session.
  2. Keep HEARTBEAT.md short. Long heartbeats get truncated first.
  3. Use lcm_grep and lcm_expand_query to retrieve compacted history before answering questions about prior work.
  4. Separate observation from inference. Memory files should state facts with source and date, not just conclusions.

Semantic Memory (OpenClaw Built-In)

If OpenClaw's local semantic memory is active:

  • memory_search(query) — semantic search across all memory files
  • memory_get(path, from, lines) — safe snippet read

Use memory_search before reading memory files directly. It's faster, scoped, and context-efficient.

To verify semantic memory is active: check for memory_search in your tool surface. If absent, memory files must be read explicitly.

Graphiti Quick Setup

For temporal KG memory (advanced use):

# 1. Install
pip install graphiti-core --user --break-system-packages

# 2. Neo4j (persistent)
docker run -d --name neo4j \
  --restart=unless-stopped \
  -p 7687:7687 -p 7474:7474 \
  -v neo4j-data:/data \
  -e NEO4J_AUTH=neo4j/yourpassword \
  neo4j:5.26

# 3. Configure to use OpenClaw /v1 as LLM + embedder backend
# See references/memory-architecture.md for OpenClawLLMClient patch

Important: Graphiti's add_episode requires 5-10 LLM calls per episode. Use it via cron/batch job, not inline during agent pulses.
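
One way to honor that constraint is to queue episodes during pulses and ingest them later from a cron job. The sketch below shows only the queueing half; the batch consumer that actually calls Graphiti is left out, and the JSONL field names are an assumption, not a Graphiti schema:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def queue_episode(queue: Path, name: str, body: str) -> None:
    """Append one episode to a JSONL queue for later batch ingestion into the KG."""
    record = {
        "name": name,
        "body": body,
        "reference_time": datetime.now(timezone.utc).isoformat(),
    }
    with queue.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

Appending to a file is cheap enough to do inline during a pulse; the expensive LLM-backed ingestion then happens off the hot path.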
