Memory Stack Core

v1.0.0

Core memory resilience layer: WAL (Write-Ahead Log), Working Buffer, and three-layer memory integration. Prevents context loss during compaction and ensures...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for neroagent/memory-stack-core.

Prompt preview (Install & Setup):
Install the skill "Memory Stack Core" (neroagent/memory-stack-core) from ClawHub.
Skill page: https://clawhub.ai/neroagent/memory-stack-core
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install memory-stack-core

ClawHub CLI

Package manager switcher

npx clawhub@latest install memory-stack-core
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description match the delivered artifacts: SKILL.md describes a WAL, working buffer, daily logs and recovery; the included scripts/run.py implements file-backed WAL and buffer, reads a local config (memory-stack-config.json), and exposes the declared tool-like actions (wal_write, wal_read, buffer_write, buffer_read, memory_health, wal-auto). No unrelated binaries, env vars, or services are requested.
Instruction Scope
The instructions and script intentionally read and write files under the workspace memory directory (wal.jsonl, working-buffer.md, daily logs), which is consistent with the stated purpose. However, the automatic 'auto_capture' behavior scans every human message for paths, URLs, decisions, preferences, values, and corrections, and appends matches to the WAL. Any sensitive specifics a user types (passwords, API keys, tokens, paths, URLs) can therefore be persisted to disk. That behavior is expected for this purpose, but it is a privacy/sensitive-data risk and should be configured or limited if needed.
Install Mechanism
No install spec is provided (instruction-only + included script). Nothing is downloaded or executed from external URLs; the included Python script is the only code. This is low-risk from an install mechanism perspective.
Credentials
The skill requests no environment variables, no external credentials, and no config paths outside the workspace. Its file I/O is limited to workspace/memory and a workspace config file — proportionate to the stated purpose.
Persistence & Privilege
always:false (no forced inclusion). The skill is invokable/autonomous by default (normal), and its core privilege is persistent local storage in the workspace. That persistence is necessary for the feature but increases the blast radius if the workspace is backed up, uploaded, or under version control. The skill does not modify other skills or system-wide agent settings.
Assessment
This skill does what it says (local WAL + working buffer) and contains no network exfiltration or secret-env requirements, but it will persist specifics from user messages to files in your workspace by default. Before installing:

  1. Decide whether you want automatic capture enabled; if not, disable auto_capture in memory-stack-config.json.
  2. Ensure the memory/ directory is excluded from backups and version control (add it to .gitignore) and set restrictive file permissions.
  3. Lower max_entries / max_size_mb or enable rotation to limit retained data.
  4. Test in an isolated workspace first to confirm behavior.
  5. Avoid pasting secrets (API keys, passwords) into chats that will be captured, or add filtering rules.

If you need stricter guarantees (encryption, secure storage, remote retention policies), consider hardening or rejecting this skill until those controls exist.

Like a lobster shell, security has layers — review code before you run it.

Tags: compaction · core · latest · memory · persistence
104 downloads
0 stars
1 version
Updated 3w ago
v1.0.0
MIT-0

Memory Stack Core

Transforms your agent's memory from fragile to antifragile. Implements proven patterns from the Claude Code leak and ClawHub's compaction-survival + session-persistence.

The Problem

LLM context windows fill up. When compaction happens, older messages get summarized. Summaries lose precision:

  • Exact file paths → "some file"
  • Specific numbers → "approximately 42"
  • Decisions → "we decided to do something"
  • Preferences → forgotten

Your agent wakes up dumber after compaction. Every session restarts from scratch.

The Solution: Three-Layer Memory Stack

┌───────────────────────────────────────────────┐
│   Long-term (MEMORY.md)                       │  ← Curated wisdom, never edit manually
├───────────────────────────────────────────────┤
│   Daily Logs (memory/YYYY-MM-DD.md)           │  ← Conversation summaries
├───────────────────────────────────────────────┤
│   Working Buffer (memory/working-buffer.md)   │  ← Danger zone captures (60%+ context)
├───────────────────────────────────────────────┤
│   WAL (memory/wal.jsonl)                      │  ← Write-Ahead Log: specifics as they appear
└───────────────────────────────────────────────┘

WAL (Write-Ahead Log)

When: Immediately upon receiving a human message that contains any:

  • Corrections ("Actually it's X not Y")
  • Proper nouns (names, places, products)
  • Preferences ("I prefer...")
  • Decisions ("Let's do X")
  • Draft changes (edits to active work)
  • Specific values (numbers, dates, IDs, URLs, paths)

What: Write a structured JSON line to memory/wal.jsonl with:

{
  "timestamp": "2026-04-01T16:20:00Z",
  "category": "decision|preference|path|value|correction|draft",
  "content": "the specific detail",
  "context": "surrounding message snippet"
}

Why: WAL entries are tiny, numerous, and survive forever. They're the source of truth for specifics.
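A WAL append can be sketched in a few lines of Python. This is a minimal illustration built from the entry schema shown above, not the skill's actual scripts/run.py; the helper name `wal_write` mirrors the tool action but is assumed here.

```python
# Minimal WAL-append sketch, assuming the JSON schema shown above.
import json
import os
from datetime import datetime, timezone

def wal_write(category: str, content: str, context: str = "",
              wal_path: str = "memory/wal.jsonl") -> dict:
    """Append one structured entry to the write-ahead log."""
    entry = {
        "timestamp": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
        "category": category,
        "content": content,
        "context": context,
    }
    os.makedirs(os.path.dirname(wal_path), exist_ok=True)
    # JSON Lines: one self-contained object per line, append-only.
    with open(wal_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

Because each line is an independent JSON object, a partially written final line can be discarded on recovery without corrupting earlier entries.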

Working Buffer

When: Token utilization reaches 60% (tracked via session_status).

What: Append every human + assistant exchange (full text) to memory/working-buffer.md:

## 2026-04-01 16:25:00 (turn 47)

**User:**
<message>

**Assistant:**
<response>

Why: The buffer is a file, so it doesn't count against context. It's your safety net for the danger zone. When compaction inevitably happens, you can recover from the buffer.
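The buffer behavior above can be sketched as a threshold-gated append. This is illustrative only: the markdown layout comes from the example above, while the function name, `turn` argument, and the way token utilization is passed in are assumptions, not the skill's actual interface.

```python
# Sketch of the 60%-threshold working-buffer append (layout from the docs;
# function signature is an assumption for illustration).
import os
from datetime import datetime, timezone

BUFFER_THRESHOLD = 0.60  # activate at 60% token utilization, per the docs

def buffer_append(user_msg: str, assistant_msg: str, turn: int,
                  token_utilization: float,
                  buffer_path: str = "memory/working-buffer.md") -> bool:
    """Append a full exchange to the working buffer once in the danger zone."""
    if token_utilization < BUFFER_THRESHOLD:
        return False  # below the danger zone: nothing persisted
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
    block = (f"## {stamp} (turn {turn})\n\n"
             f"**User:**\n{user_msg}\n\n"
             f"**Assistant:**\n{assistant_msg}\n\n")
    os.makedirs(os.path.dirname(buffer_path), exist_ok=True)
    with open(buffer_path, "a", encoding="utf-8") as f:
        f.write(block)
    return True
```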

Daily Logs & Long-term

Already in OpenClaw. We integrate by:

  • At 80% context, suggest /wrap_up to flush to daily log
  • Periodic (weekly) review to promote daily log entries to MEMORY.md

Usage

Automatic Mode (recommended)

The skill hooks into your agent's message processing:

  1. Install skill
  2. Enable WAL and buffer in agent config (or use defaults)
  3. Nothing else — the skill automatically:
    • Scans human messages for specifics → WAL
    • Monitors token usage → activates buffer at 60%
    • Provides /memory_health command to view status

Manual Commands

  • tool("memory-stack-core", "wal_write", {...}) — manually add WAL entry
  • tool("memory-stack-core", "wal_read", {"limit": 50}) — view recent WAL
  • tool("memory-stack-core", "buffer_read", {"tail": 1000}) — view buffer tail
  • tool("memory-stack-core", "memory_health", {}) — get health report
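The docs don't specify what the memory_health report contains; as a sketch, comparable stats can be computed directly from the files on disk. The field names in the returned dict are assumptions for illustration.

```python
# Hypothetical health report computed from the memory files themselves;
# the real memory_health action's output format is not documented here.
import os

def memory_health(memory_dir: str = "memory") -> dict:
    """Report WAL entry count and working-buffer size in bytes."""
    wal = os.path.join(memory_dir, "wal.jsonl")
    buf = os.path.join(memory_dir, "working-buffer.md")
    report = {"wal_entries": 0, "buffer_bytes": 0}
    if os.path.exists(wal):
        with open(wal, encoding="utf-8") as f:
            report["wal_entries"] = sum(1 for line in f if line.strip())
    if os.path.exists(buf):
        report["buffer_bytes"] = os.path.getsize(buf)
    return report
```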

Recovery Protocol

When context is lost (e.g., after compaction or new session):

  1. Read memory/working-buffer.md last entries
  2. Read recent WAL entries (last 50)
  3. Read yesterday's + today's memory/YYYY-MM-DD.md
  4. Reconstruct missing specifics

The skill provides a recover() helper (used automatically by the agent if configured).
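The four recovery steps can be sketched as a single gather pass. This is an illustrative reading of the protocol above, not the skill's actual recover() implementation; the returned dict shape and the 4 KB buffer-tail cutoff are assumptions.

```python
# Sketch of the recovery protocol: buffer tail + last 50 WAL entries +
# yesterday's and today's daily logs, gathered into one dict.
import json
import os
from datetime import date, timedelta

def recover(memory_dir: str = "memory", wal_limit: int = 50) -> dict:
    """Collect the three recovery sources into one structure."""
    out = {"buffer_tail": "", "wal": [], "daily_logs": {}}
    buf = os.path.join(memory_dir, "working-buffer.md")
    if os.path.exists(buf):
        with open(buf, encoding="utf-8") as f:
            out["buffer_tail"] = f.read()[-4000:]  # last ~4 KB of exchanges
    wal = os.path.join(memory_dir, "wal.jsonl")
    if os.path.exists(wal):
        with open(wal, encoding="utf-8") as f:
            lines = f.read().splitlines()
        out["wal"] = [json.loads(line) for line in lines[-wal_limit:]]
    for day in (date.today() - timedelta(days=1), date.today()):
        path = os.path.join(memory_dir, f"{day.isoformat()}.md")
        if os.path.exists(path):
            with open(path, encoding="utf-8") as f:
                out["daily_logs"][day.isoformat()] = f.read()
    return out
```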

Configuration

Create memory-stack-config.json in workspace root (optional):

{
  "wal": {
    "enabled": true,
    "auto_capture": true,
    "max_entries": 10000
  },
  "buffer": {
    "enabled": true,
    "threshold_token_percent": 60,
    "max_size_mb": 10
  },
  "integration": {
    "auto_wrap_up_at_token_percent": 80,
    "include_buffer_in_wrap_up": true
  }
}
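Since the file is optional, a loader has to fall back to defaults when it is absent. A minimal sketch, assuming the defaults are exactly the example values above and that user settings merge over them section by section:

```python
# Sketch of loading memory-stack-config.json with documented defaults.
import json
import os

DEFAULTS = {
    "wal": {"enabled": True, "auto_capture": True, "max_entries": 10000},
    "buffer": {"enabled": True, "threshold_token_percent": 60,
               "max_size_mb": 10},
    "integration": {"auto_wrap_up_at_token_percent": 80,
                    "include_buffer_in_wrap_up": True},
}

def load_config(path: str = "memory-stack-config.json") -> dict:
    """Merge a user config over the defaults, section by section."""
    config = {k: dict(v) for k, v in DEFAULTS.items()}  # copy each section
    if os.path.exists(path):
        with open(path, encoding="utf-8") as f:
            user = json.load(f)
        for section, values in user.items():
            config.setdefault(section, {}).update(values)
    return config
```

Copying each section before merging keeps DEFAULTS itself unmutated across calls.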

Performance

  • WAL write: <1ms (append to file)
  • Buffer append: <1ms
  • Memory overhead: ~100B per WAL entry; ~1KB per buffer turn
  • Disk: WAL grows ~1-2KB per conversation; buffer ~5-10KB per session

Negligible impact.

Compatibility

  • Works with any OpenClaw agent (uses standard tool interface)
  • No external dependencies
  • Compatible with compaction-survival patterns (this is an implementation)
  • Enhances session-persistence by providing WAL + buffer layers

FAQ

Q: Do I need to change my agent?
A: Only to optionally call memory_health or recover if you want explicit control. Otherwise install and go.

Q: What if I already use session-persistence?
A: This skill implements the WAL + buffer layers that session-persistence mentions. They're complementary.

Q: Will WAL fill my disk?
A: WAL is capped at max_entries (default 10k). Old entries can be archived to memory/wal-archive.jsonl monthly.
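The archive step mentioned above can be sketched as a rotation that keeps the newest max_entries lines and moves the overflow to wal-archive.jsonl. The paths come from the docs; the function name and return value are assumptions for illustration.

```python
# Sketch of monthly WAL rotation: overflow lines move to the archive.
import os

def archive_wal(wal_path: str = "memory/wal.jsonl",
                archive_path: str = "memory/wal-archive.jsonl",
                max_entries: int = 10000) -> int:
    """Move overflow WAL lines to the archive; return how many moved."""
    if not os.path.exists(wal_path):
        return 0
    with open(wal_path, encoding="utf-8") as f:
        lines = f.read().splitlines()
    overflow = len(lines) - max_entries
    if overflow <= 0:
        return 0  # under the cap: nothing to rotate
    with open(archive_path, "a", encoding="utf-8") as f:
        f.write("\n".join(lines[:overflow]) + "\n")  # oldest entries out
    with open(wal_path, "w", encoding="utf-8") as f:
        f.write("\n".join(lines[overflow:]) + "\n")  # newest entries kept
    return overflow
```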

Q: Can I use without ToolRegistry?
A: Yes, the skill provides standalone scripts too (scripts/wal.py, scripts/buffer.py).

License

Commercial. One-time purchase includes lifetime updates. Team licenses allow unlimited agents.


Built with insights from the Claude Code leak and ClawHub community.
