Adaptive Memory

v1.0.0

Hierarchical memory management for AI agents across sessions. Maintains three layers: daily notes (raw logs), active context (working memory), and long-term memory (curated knowledge).

by Yoshikazu Terashi (@yozu)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt below and paste it into OpenClaw to install yozu/adaptive-memory.

Prompt preview (Install & Setup):
Install the skill "Adaptive Memory" (yozu/adaptive-memory) from ClawHub.
Skill page: https://clawhub.ai/yozu/adaptive-memory
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install adaptive-memory

ClawHub CLI


npx clawhub@latest install adaptive-memory
Security Scan

  • VirusTotal: Benign (full report available)
  • OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description (hierarchical memory) match what is provided: SKILL.md describes daily notes, active context, long-term memory, and distillation workflows, and the init script creates the described files and directories. No unrelated credentials, binaries, or services are requested.
Instruction Scope
Instructions direct the agent to read and write files under the workspace (memory/*.md, MEMORY.md, pending_tasks.json, heartbeat-state.json), which is appropriate for a local memory manager. The docs caution against storing secrets and include an example reference path (~/.secrets/service.env). The skill does not instruct reading external secret stores, but reviewers should note that the agent will be instructed to read workspace files, so sensitive data should not be placed there.
Install Mechanism
No install spec; the only executable artifact is a small bash init script (scripts/init_memory.sh) that creates folders/files. Nothing is downloaded or extracted from external URLs and no packages are installed.
Credentials
The skill declares no required environment variables, credentials, or config paths. The SKILL.md's single example path for secrets is advisory (to avoid storing secrets in memory files) and is not a requirement to provide any secret.
Persistence & Privilege
Skill is not always-enabled and does not request elevated/persistent platform privileges. It creates and updates files only within the provided workspace (init script uses a workspace_dir argument or current directory). It does not modify other skills or system-wide agent settings.
Assessment
This skill appears internally consistent and implements a local file-based memory system. Before installing, ensure you: (1) do not place passwords, API keys, or other secrets in the workspace memory files (the docs explicitly advise against that), (2) restrict filesystem access to the workspace so other actors can't read sensitive notes, and (3) decide whether you want an autonomous agent reading/writing these files — the skill instructs the agent to load context from disk at session start. If you need secrets referenced in memory, store them in a secure secret store outside the workspace and avoid putting secret contents into MEMORY.md or daily notes.

Like a lobster shell, security has layers — review code before you run it.

latest: vk971ckjtr6rsmxhfahhx84az8d8409d1
108 downloads · 0 stars · 1 version
Updated 3w ago
v1.0.0 · MIT-0

Adaptive Memory

Hierarchical memory management for AI agents. Three layers — daily notes, active context, and long-term memory — with periodic distillation to keep knowledge fresh and relevant.

Problem This Solves

AI agents lose context between sessions and after context compaction. Without structured memory:

  • Decisions get re-debated
  • Completed work gets redone
  • Lessons learned are forgotten
  • Active tasks fall through the cracks

Memory Architecture

memory/
├── YYYY-MM-DD.md          # Daily notes (raw, append-only)
├── active_context.md       # Working memory (current tasks, blockers)
├── channel_context/        # Per-channel conversation summaries (optional)
│   └── {channel-name}.md
└── pending_tasks.json      # Task tracker (structured)

MEMORY.md                   # Long-term memory (curated, distilled)

Layer 1: Daily Notes (memory/YYYY-MM-DD.md)

Raw log of what happened each day. Append-only, minimal editing.

# 2026-04-01

## Tasks
- Implemented login flow for project X
- Fixed timezone bug in cron scheduler

## Decisions
- Chose SQLite over JSON for data storage (performance at scale)
- API rate limit: 100 req/min with exponential backoff

## Learned
- Library Y requires v3+ for async support
- Browser cookies are not shared across profiles

## Blockers
- Waiting on API key approval from service Z

Rules:

  • Create memory/ directory if it doesn't exist
  • One file per day, named YYYY-MM-DD.md
  • Append throughout the day, don't restructure
  • Include: decisions, discoveries, errors, context that future-you needs
  • Exclude: secrets, tokens, passwords, API keys (reference file paths instead)

Layer 2: Active Context (memory/active_context.md)

Working memory — what's in progress right now. Updated as tasks start, complete, or block.

# Active Context

## In Progress
- **Project X login flow**: OAuth integration, 70% complete
  - Next: token refresh logic

## Blocked / Waiting
- **API key for service Z**: Requested 2026-03-30, awaiting approval

## Recently Completed
- **Timezone fix**: Deployed, cron jobs now fire correctly (2026-04-01)

Rules:

  • Keep current — stale entries erode trust
  • Move completed items to "Recently Completed" (prune after a few days)
  • Always check this file at session start — it's the fastest way to resume context
  • Any channel, any session should be able to read this and understand what's happening

Layer 3: Long-Term Memory (MEMORY.md)

Curated knowledge distilled from daily notes. The agent's permanent memory.

# Long-Term Memory

## Systems Built
- **Data pipeline**: SQLite-based, runs daily at 6 AM, stores in project.db
- **Monitoring**: 3-tier alert system (info → warning → critical)

## Lessons Learned
1. SQLite > JSON for anything over 100 records
2. Always set explicit timeouts on HTTP requests
3. Browser automation: check for virtual scroll before scraping

## Key Decisions
- Chose framework A over B (reason: better async support, MIT license)
- API integration uses webhook push, not polling

Rules:

  • This is curated, not a dump — every entry should justify its space
  • Review and update periodically (see Distillation Cycle)
  • Organize by topic, not by date
  • No secrets or credentials — reference file paths only (e.g., "Auth: see ~/.secrets/service.env")

Optional: Channel Context (memory/channel_context/{name}.md)

For multi-channel setups (Slack, Discord, etc.), maintain per-channel summaries so context survives compaction.

# channel-name

## Current Topics
- Discussing migration plan for database X
- Reviewing PR #42

## Recent Decisions
- Approved new CI pipeline config (2026-04-01)

## Unresolved
- Performance regression in endpoint /api/users — investigating

Rules:

  • Update at natural conversation boundaries (topic complete, day change)
  • Keep concise — this is a summary, not a transcript
  • One file per channel

Optional: Task Tracker (memory/pending_tasks.json)

Structured tracking for tasks that must not be forgotten.

{
  "lastUpdated": "2026-04-01T10:00:00Z",
  "tasks": [
    {
      "id": "unique-id",
      "title": "Short description",
      "status": "in_progress",
      "priority": "high",
      "createdAt": "2026-04-01T09:00:00Z",
      "note": "Additional context"
    }
  ]
}

Valid statuses: pending, in_progress, blocked, done

Session Start Routine

At the beginning of every session, load context in this order:

  1. memory/active_context.md — what's in progress
  2. memory/YYYY-MM-DD.md (today + yesterday) — recent events
  3. MEMORY.md — long-term knowledge (main/private sessions only)
  4. Channel context (if applicable) — memory/channel_context/{name}.md
  5. memory/pending_tasks.json — unfinished tasks

Do not respond to messages until context is loaded. "I don't know what you're talking about" is never acceptable when the answer is in these files.

Writing Guidelines

What to Capture

| Write it down | Skip it |
| --- | --- |
| Decisions and their reasoning | Routine operations that went smoothly |
| Errors and how they were fixed | Intermediate debugging steps |
| Key facts about the environment | Information already in code comments |
| User preferences and patterns | Temporary values that change hourly |
| Lessons that prevent future mistakes | Obvious things any model would know |

Security Rules

  • Never write secrets (API keys, passwords, tokens) to memory files
  • Reference paths instead: "Auth config: ~/.secrets/service.env"
  • If a credential appears in chat, acknowledge it without repeating the value
  • Memory files may be shared or version-controlled — treat them as semi-public

Distillation Cycle

Periodically consolidate daily notes into long-term memory. Recommended: weekly or when daily notes accumulate (3+ unprocessed files).

Four-Phase Process

Phase 1: Orient

Read MEMORY.md to understand current state. Note what's already captured.

Phase 2: Gather

Read recent daily notes (memory/YYYY-MM-DD.md) that haven't been consolidated yet.

Phase 3: Consolidate

For each daily note, extract what deserves long-term storage:

  • New systems or tools built
  • Lessons learned (especially from mistakes)
  • Decisions with lasting impact
  • Changed preferences or workflows
  • Facts about the environment that won't change soon

Add these to the appropriate section in MEMORY.md.

Phase 4: Prune

Remove from MEMORY.md:

  • Entries that are no longer relevant
  • Information superseded by newer entries
  • Overly detailed entries that can be summarized

Tracking Distillation

Record when distillation last ran to avoid redundant work:

In memory/heartbeat-state.json (or a similar state file):

{
  "lastConsolidatedAt": "2026-04-01T10:00:00Z"
}

Automation

Distillation can be triggered by:

  • Cron job — weekly scheduled task (recommended)
  • Heartbeat — check if 48h+ since last distillation and 3+ unprocessed daily notes
  • Manual — user requests "consolidate memory" or "review notes"

Integration with Session-Recall

This skill manages what gets stored. A retrieval skill like session-recall (which searches transcripts, memory files, and channel context) manages how to find it. They complement each other:

  • adaptive-memory → organizes memory into searchable layers
  • session-recall → searches those layers when context is missing

Using both together provides full coverage: structured storage + intelligent retrieval.

Quick Start

  1. Initialize the memory directory structure:

    # Using the bundled script (recommended)
    ./scripts/init_memory.sh
    
    # Or manually
    mkdir -p memory/channel_context
    touch memory/active_context.md MEMORY.md
    echo '{"lastUpdated":"","tasks":[]}' > memory/pending_tasks.json
    
  2. Add to your AGENTS.md or session start routine:

    Before responding, read:
    1. memory/active_context.md
    2. memory/YYYY-MM-DD.md (today + yesterday)
    3. MEMORY.md
    
  3. Start logging to daily notes as you work

  4. Set up weekly distillation (cron, heartbeat, or manual)

The system grows organically from here.
