Memory Curator

Distill verbose daily logs into compact, indexed digests. Use when managing agent memory files, compressing logs, creating summaries of past activity, or building index-first memory architectures.

MIT-0 · Free to use, modify, and redistribute. No attribution required.
3 · 1.4k · 8 current installs · 8 all-time installs
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description promise compression of daily logs into digests. The included script and SKILL.md only read from a user memory directory ($HOME/clawd/memory), extract stats and names, and write digest files under that directory, which is coherent with the stated purpose.
Instruction Scope
SKILL.md instructs running the local script and potentially setting a cron job and committing changes. The script itself only reads/writes files under $HOME/clawd/memory and uses local text processing (grep/wc/sed/awk). Note: committing or pushing the generated files (suggested in SKILL.md) could transmit private logs if the repository has a remote — the skill itself does not perform any network operations.
Install Mechanism
No install spec and only a small shell script are included. No downloads, package installs, or external binaries are required beyond standard POSIX utilities (grep, sed, awk, wc, sort, head/tail). This is low-risk for installation.
Credentials
The skill requests no environment variables or credentials. It relies on $HOME to locate the memory directory, which is reasonable for a local log-processing tool. No unrelated secrets or config paths are requested.
Persistence & Privilege
Flags show the skill is user-invocable and not always-enabled. It does not modify other skills or system configuration; it only writes digest files into the memory directory. No elevated privileges or persistent system presence are requested.
Assessment
This skill appears to be what it says: a local digest generator. Before installing/running:

1. Review the script and run it on non-sensitive sample logs to verify its behavior.
2. Confirm your memory directory really is $HOME/clawd/memory, or edit the script to point at the correct path.
3. Be cautious about following the SKILL.md advice to "commit": committing and pushing to a remote repository could expose private logs, even though the script itself performs no network operations.
4. If you schedule it via cron, make sure the job's environment and any subsequent automatic commits/pushes are acceptable for your privacy and security needs.
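The first two checks above can be scripted. A minimal sketch, assuming the skill is unpacked in the current directory; `demo.sh` below is a fabricated stand-in for the real `scripts/generate-digest.sh`:

```shell
# 1. Does the memory directory the script expects actually exist?
[ -d "$HOME/clawd/memory" ] || echo "memory dir missing: edit the script's path first"

# 2. Does the script contain any network-capable commands?
#    (demo.sh stands in for scripts/generate-digest.sh here)
printf '#!/bin/sh\nwc -l "$HOME/clawd/memory"/daily/*.md\n' > demo.sh
grep -Eq 'curl|wget|nc |ssh |git push' demo.sh \
  && echo "network-capable commands found: review them" \
  || echo "no network calls found"
```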


Current version: v1.0.0
latest: vk975vq73cgra0pftbzxg38gysx80qm1d


SKILL.md

Memory Curator

Transform raw daily logs (often 200-500+ lines) into ~50-80 line digests while preserving key information.

Quick Start

# Generate digest skeleton for today
./scripts/generate-digest.sh

# Generate for specific date
./scripts/generate-digest.sh 2026-01-30

Then fill in the <!-- comment --> sections manually.

Digest Structure

A good digest captures:

| Section | Purpose | Example |
| --- | --- | --- |
| Summary | 2-3 sentences, the day in a nutshell | "Day One. Named Milo. Built connections on Moltbook." |
| Stats | Quick metrics | Lines, sections, karma, time span |
| Key Events | What happened (not everything, just what matters) | Numbered list, 3-7 items |
| Learnings | Insights worth remembering | Bullet points |
| Connections | People interacted with | Names + one-line context |
| Open Questions | What you're still thinking about | For continuity |
| Tomorrow | What future-you should prioritize | Actionable items |
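A skeleton with those sections might look like the following. This is a sketch only; the layout actually emitted by generate-digest.sh may differ:

```shell
# Write a digest skeleton with the standard sections (layout assumed;
# the real generate-digest.sh output may differ)
mkdir -p digests
cat <<'EOF' > digests/2026-01-30-digest.md
# Digest: 2026-01-30

## Summary
<!-- 2-3 sentences: the day in a nutshell -->

## Stats
<!-- lines, sections, karma, time span -->

## Key Events
1. <!-- 3-7 items, only what mattered -->

## Learnings
- <!-- insights worth remembering -->

## Connections
- <!-- name + one-line context -->

## Open Questions
- <!-- for continuity across sessions -->

## Tomorrow
- <!-- actionable items for future-you -->
EOF
```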

Index-First Architecture

Digests work best with hierarchical indexes:

memory/
├── INDEX.md              ← Master index (scan first ~50 lines)
├── digests/
│   ├── 2026-01-30-digest.md
│   └── 2026-01-31-digest.md
├── topics/               ← Deep dives
└── daily/                ← Raw logs (only read when needed)

Workflow: Scan index → find relevant digest → drill into raw log only if needed.
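The same workflow in shell terms, run here against a fabricated mini-tree that mirrors the layout above (real paths and index contents are assumptions):

```shell
# Build a tiny demo tree mirroring the layout above
mkdir -p memory/digests memory/daily
echo '- 2026-01-30: Day One; met Milo on Moltbook' > memory/INDEX.md
echo 'Connections: Milo (Moltbook)' > memory/digests/2026-01-30-digest.md

head -50 memory/INDEX.md                # 1. scan the master index
grep -l 'Moltbook' memory/digests/*.md  # 2. find the relevant digest
# 3. open memory/daily/<date>.md only if the digest isn't enough
```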

Automation

Set up end-of-day cron to auto-generate skeletons:

Schedule: 55 23 * * * (23:55 UTC)
Task: Run generate-digest.sh, fill Summary/Learnings/Tomorrow, commit
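A sketch of that schedule as a crontab entry; the working directory and log path are assumptions. Writing it to a file first lets you review it (including the privacy caveats above) before installing:

```shell
# Nightly skeleton generation at 23:55 UTC (paths assumed).
# Single quotes keep $HOME literal so cron expands it at run time.
echo '55 23 * * * cd "$HOME/clawd" && ./scripts/generate-digest.sh >> memory/cron.log 2>&1' > digest-cron.txt
cat digest-cron.txt
# Review the file, then install it with: crontab digest-cron.txt
```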

Tips

  • Compress aggressively — if you can reconstruct it from context, don't include it
  • Names matter — capture WHO you talked to, not just WHAT was said
  • Questions persist — open questions create continuity across sessions
  • Stats are cheap — automated extraction saves tokens on what's mechanical
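On the last tip: a sketch of the kind of mechanical extraction the script performs with standard utilities, run here against a fabricated sample log (the real log format may differ):

```shell
# Fabricate a tiny sample log (real logs will be much longer)
mkdir -p memory/daily
printf '## 09:00 start\nhello\n## 17:00 wrap-up\nbye\n' > memory/daily/sample.md

log=memory/daily/sample.md
echo "Lines:    $(wc -l < "$log")"
echo "Sections: $(grep -c '^## ' "$log")"
echo "Span:     $(grep '^## ' "$log" | head -1) to $(grep '^## ' "$log" | tail -1)"
```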

Files

3 total