Faya Session Memory

Persistent session memory system that prevents knowledge loss after context compaction. Converts session transcripts to searchable Markdown, builds an auto-u...

MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description promise (persistent session memory, searchable Markdown, glossary, cron-based updating) matches the included scripts: session-to-memory.py converts session JSONL to Markdown, build-glossary.py builds a glossary index, and cron-optimizer.py suggests cron prompt improvements. All required operations (reading session logs, writing memory/*.md, generating reports) are present and appropriate for the stated purpose. Minor discrepancy: SKILL.md claims broader session-scanning semantics (e.g., scanning ~/.openclaw/agents/*/sessions/ and supporting --agent) while session-to-memory.py uses a fixed default (~/.openclaw/agents/main/sessions) and does not implement an --agent flag; this is a documentation/instruction mismatch rather than malicious functionality.
Instruction Scope
Runtime instructions tell the user/agent to run the shipped scripts and to create cron jobs to run them periodically. The instructions reference the right files and behavior, but include some inaccurate CLI/docs details: SKILL.md documents an --agent option and scanning wildcard paths that the converter script does not implement. The scripts read and write only local files under ~/.openclaw (sessions, workspace, cron JSON) and do not attempt to read other unrelated system paths or send data externally.
Install Mechanism
No install spec is provided (instruction-only skill with shipped scripts). Nothing is downloaded or executed from an external URL; scripts are plain Python files intended to be run locally. This is the lowest-risk install posture.
Credentials
The skill declares no required environment variables or credentials. The code optionally respects a WORKSPACE env var for the glossary builder; otherwise it uses user-home ~/.openclaw paths. The scripts operate on local session and cron JSON files only — there are no requests for unrelated secrets or cloud credentials.
Persistence & Privilege
always is false and the skill does not attempt to enable itself, modify other skills, or write to global system configuration. It writes files under the user's ~/.openclaw/workspace/memory/ and ~/.openclaw/cron report locations, which is expected for a local memory/indexing tool. Cron jobs are suggested but not auto-installed.
Assessment
This skill appears to do what it says: convert local OpenClaw session logs into Markdown, build a searchable glossary, and suggest cron-based updates. Before installing/running it, consider the following:

  • Review the scripts locally — they read and write files under your home directory (~/.openclaw/...). If your session logs contain very sensitive data (passwords, private keys, personal PII), decide whether you want those written into new Markdown transcripts or included in vector search indexes.
  • Note the documentation mismatch: SKILL.md mentions scanning multiple agent folders and a --agent option that session-to-memory.py does not implement. The scripts mainly target ~/.openclaw/agents/main/sessions and ~/.openclaw/workspace/memory; if you have multiple agents, adapt the script or run it per-agent manually.
  • The tool does not contact external servers or require credentials, and it does not auto-install cron jobs; you must create cron entries yourself if you want automated runs.
  • Test on a copy: run the scripts in a safe test workspace or with a small set of session files (use --new/--force flags where applicable) to verify outputs and truncation behavior meet your privacy expectations.

If you want higher confidence, provide: (1) sample session file paths you use (to confirm the script will find them), or (2) any custom agent layout so we can confirm the script will target the right directories. If the skill claimed networked features or required secrets, that would raise concerns — it does not.


Current version: v1.0.0


SKILL.md

Session Memory

Solve the #1 problem with long-running AI agents: knowledge loss after context compaction.

The Problem

When sessions compact (summarize old messages to free context), specific details are lost: names, decisions, file paths, reasoning. The agent retains a summary but loses the ability to recall "What exactly did Annika say?" or "When did we decide to use v6 format?"

The Solution: Three-Layer Memory Architecture

Layer 1: MEMORY.md          — Curated long-term memory (human-edited)
Layer 2: SESSION-GLOSSAR.md — Auto-generated structured index (people/projects/decisions/timeline)
Layer 3: memory/sessions/   — Full session transcripts as searchable Markdown

All three layers live under memory/ and are automatically vectorized by OpenClaw's memory search, creating a navigational hierarchy: glossary finds the right session, session provides the details.

Setup (run once)

Step 1: Convert existing sessions to Markdown

python3 scripts/session-to-memory.py

This scans all JSONL session logs in ~/.openclaw/agents/*/sessions/ and converts them to memory/sessions/session-YYYY-MM-DD-HHMM-*.md. Truncates long assistant responses to 2KB, skips system messages, tracks state to avoid re-processing.

Options:

  • --new — Only convert sessions not yet processed (for incremental runs)
  • --agent main — Specify agent ID (default: main)
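As a rough illustration of what the converter does, here is a minimal sketch of a JSONL-to-Markdown pass. The `role`/`content` field names, the truncation marker, and the heading format are assumptions for illustration, not the shipped script's actual code:

```python
import json
from pathlib import Path

MAX_LEN = 2048  # truncate long assistant responses to roughly 2KB

def convert(jsonl_path: Path) -> str:
    """Render one JSONL session log as a Markdown transcript."""
    lines = [f"# Session {jsonl_path.stem}\n"]
    for raw in jsonl_path.read_text().splitlines():
        if not raw.strip():
            continue
        msg = json.loads(raw)
        role, text = msg.get("role"), msg.get("content", "")
        if role == "system":  # system messages are skipped
            continue
        if role == "assistant" and len(text) > MAX_LEN:
            text = text[:MAX_LEN] + " …[truncated]"
        lines.append(f"**{role}:** {text}\n")
    return "\n".join(lines)
```

The real script additionally tracks state so already-converted sessions are skipped on `--new` runs.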

Step 2: Build the glossary

python3 scripts/build-glossary.py

Scans all session transcripts and builds memory/SESSION-GLOSSAR.md with:

  • People — Who was mentioned, in how many sessions, date ranges
  • Projects — Which projects discussed, with relevant topic tags
  • Topics — Categorized themes (Email Drafts, Website Build, Security, etc.)
  • Timeline — Per-day summary (session count, people, topics)
  • Decisions — Extracted decision-like statements with dates

Options:

  • --incremental — Only process new sessions (uses cached scan state)
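A minimal sketch of the kind of scan the glossary builder performs for the People section. The whole-word regex match and the `session-*.md` naming are assumptions based on the layout described in this document:

```python
import re
from collections import defaultdict
from pathlib import Path

# Hypothetical excerpt of the builder's entity table
KNOWN_PEOPLE = {"alice": "Alice Smith — Project Manager"}

def index_people(session_dir: Path) -> dict:
    """Map each known person to the transcripts that mention them."""
    hits = defaultdict(set)
    for md in session_dir.glob("session-*.md"):
        text = md.read_text().lower()
        for key in KNOWN_PEOPLE:
            if re.search(rf"\b{re.escape(key)}\b", text):
                hits[key].add(md.name)
    return hits
```

From a mapping like this, per-person session counts and date ranges (parsed from the filenames) fall out directly.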

Step 3: Set up cron jobs for auto-updates

Create two cron jobs (use a cheap model like Gemini Flash):

Job 1: Session sync + glossary rebuild (every 4-6 hours)

Task: Run `python3 scripts/session-to-memory.py --new` then
      `python3 scripts/build-glossary.py --incremental`.
      Report how many new sessions were converted and indexed.

Optional Job 2: Pre-compaction memory flush check. Already built into AGENTS.md by default — just ensure the agent writes to memory/YYYY-MM-DD.md before each compaction.

Customizing Entity Detection

Edit scripts/build-glossary.py to add your own known people and projects:

KNOWN_PEOPLE = {
    "alice": "Alice Smith — Project Manager",
    "bob": "Bob Jones — CTO",
}

KNOWN_PROJECTS = {
    "website-redesign": "Website Redesign — Q1 Initiative",
    "api-migration": "API Migration — v2 to v3",
}

The glossary also detects topics via regex patterns. Add new patterns in the topic_patterns dict for your domain.
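For example, a `topic_patterns` entry might pair a topic label with a compiled regex. These two patterns are illustrative, not the script's shipped defaults:

```python
import re

topic_patterns = {
    "Email Drafts": re.compile(r"\b(draft|email|newsletter)\b", re.I),
    "Security": re.compile(r"\b(credential|secret|token)\b", re.I),
}

def tag_topics(text: str) -> list:
    """Return every topic whose pattern matches the transcript text."""
    return [topic for topic, pat in topic_patterns.items() if pat.search(text)]
```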

How It Works With memory_search

Once set up, memory_search("Alice project decision") will find:

  1. The glossary entry for Alice (which sessions she appears in)
  2. The actual session transcript where the decision was discussed
  3. Any MEMORY.md entry about Alice

This gives the agent a navigation layer (glossary) plus detail access (transcripts) — much better than either alone.

File Structure After Setup

memory/
├── MEMORY.md                    — Curated (you maintain this)
├── SESSION-GLOSSAR.md           — Auto-generated index
├── YYYY-MM-DD.md                — Daily notes
├── .glossary-state.json         — Glossary builder state
├── .glossary-scans.json         — Cached scan results
└── sessions/
    ├── .state.json              — Converter state
    ├── session-2026-01-15-0830-abc123.md
    ├── session-2026-01-15-1200-def456.md
    └── ...
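The hidden state files above are what make `--new` and `--incremental` cheap: the tools only need a record of which sessions they have already processed. A sketch of that idea, assuming a plain JSON list of session IDs (the real `.state.json` schema may differ):

```python
import json
from pathlib import Path

STATE_FILE = Path("memory/sessions/.state.json")  # converter state, per the layout above

def load_seen() -> set:
    """IDs of sessions already converted; empty on first run."""
    if STATE_FILE.exists():
        return set(json.loads(STATE_FILE.read_text()))
    return set()

def mark_seen(seen: set, session_id: str) -> None:
    """Record a session as processed so incremental runs skip it."""
    seen.add(session_id)
    STATE_FILE.parent.mkdir(parents=True, exist_ok=True)
    STATE_FILE.write_text(json.dumps(sorted(seen)))
```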

Cron Memory Optimizer

Cron jobs run in isolated sessions with zero memory context. The optimizer analyzes your cron jobs and suggests memory-enhanced versions:

python3 scripts/cron-optimizer.py

This scans ~/.openclaw/cron/jobs.json, identifies jobs that would benefit from memory context, and generates memory/cron-optimization-report.md with before/after prompts and implementation guidance.
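A sketch of the kind of heuristic such an optimizer can apply. The `jobs.json` schema (a list of objects with `name` and `prompt` fields) and the keyword list are assumptions for illustration:

```python
import json
from pathlib import Path

MONITOR_WORDS = ("monitor", "ping", "health check")  # jobs left as-is

def needs_memory(job: dict) -> bool:
    """Suggest memory context unless the job looks like pure monitoring."""
    prompt = job.get("prompt", "").lower()
    return not any(w in prompt for w in MONITOR_WORDS)

def suggest(jobs_path: Path) -> list:
    """Names of jobs that would benefit from a memory_search preamble."""
    jobs = json.loads(jobs_path.read_text())
    return [job["name"] for job in jobs if needs_memory(job)]
```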

Example optimization:

Original: "Run daily research scout..."
Enhanced: "Before starting: Use memory_search to find recent context about research activities. Check memory/SESSION-GLOSSAR.md for relevant people, projects, and recent decisions. Then proceed with the original task using this context.

Run daily research scout..."

The script is conservative (suggests only, never auto-modifies) and skips monitoring jobs that don't need context.

Tips

  • Run the full rebuild (python3 scripts/build-glossary.py without --incremental) occasionally to pick up improvements to entity detection
  • The glossary is most useful when KNOWN_PEOPLE and KNOWN_PROJECTS are populated — spend 5 minutes adding your key contacts and projects
  • For agents that run 24/7, the cron job keeps everything current automatically
  • Session transcripts can get large (our 297 sessions = 24MB) — this is fine, OpenClaw's vector search handles it efficiently
  • Use the cron optimizer after setting up memory to enhance existing automation

Files

4 total
