Agent Memory Loop
v2.1.1
Lightweight self-improvement loop for AI agents. Capture errors, corrections, and discoveries in a fast one-line format, dedup them, and queue recurring or critical lessons for human review.
Security Scan
OpenClaw
Benign (high confidence)
Purpose & Capability
The name and description match the behavior: the skill provides local one-line learnings, dedup, and a promotion queue. The required binaries (grep, date) and included scripts are appropriate and proportional to the stated purpose.
Instruction Scope
SKILL.md limits actions to creating and scanning .learnings/*.md, queuing candidates, and asking humans to approve promotions. Instructions do not reference external endpoints, unrelated config paths, or secret environment variables. The workflow relies on conventions (source:agent/user/external) which must be followed to be effective.
Install Mechanism
No external install spec; included install/setup scripts only create a local .learnings directory and copy bundled assets. No downloads from arbitrary URLs or archive extraction are present.
Credentials
No environment variables or credentials are required. The skill reads and writes only to a workspace-local .learnings directory, which is appropriate for its purpose.
Persistence & Privilege
always:false and normal agent invocation are used. The skill does not modify other skills or system-wide agent settings. It intentionally advises against auto-writing instruction files (promotions require human approval).
Assessment
This skill appears coherent and local-only: it sets up a .learnings folder, provides grep/date-based review tooling, and explicitly avoids auto-writing instruction files. Before installing:
- inspect the scripts (they are short and local) and run them in a safe workspace, not the system root, to avoid accidental file changes
- ensure your agent and human reviewers follow the source labeling convention; mislabeling an external finding as source:agent could bypass the intended review protection
- run review.sh periodically to surface pending promotions and stale items
- if you rely on date features, test review.sh on your platform (the script tries BSD and GNU date variants)
If you need stricter guarantees, add automation that enforces source labels or restricts who can change promotion-queue.md.
Like a lobster shell, security has layers: review code before you run it.
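The assessment mentions that review.sh tries BSD and GNU date variants. The bundled script's exact code is not shown here, but a common portable pattern for this looks like the following sketch:

```shell
# Compute "7 days ago" as YYYY-MM-DD, trying the GNU flag first,
# then falling back to the BSD flag (macOS). Only one will succeed.
cutoff=$(date -d '7 days ago' +%Y-%m-%d 2>/dev/null || date -v-7d +%Y-%m-%d)
echo "$cutoff"
```

Testing this one-liner on your platform is a quick way to confirm the date features will work before relying on them.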
Runtime requirements
Bins: grep, date
Tags: errors, latest, learning, memory, safety, self-improvement
Agent Memory Loop
Lightweight learning for agents that reset between sessions.
Use this when
- you want a low-friction way to log mistakes, corrections, and discoveries
- you need recurring lessons without bloating core instructions
- you want human-reviewed promotion instead of auto-writing to instruction files
- you want a quick pre-task scan for known failure patterns
Do not use it for
- autonomous self-modification
- external content promotion
- heavy multi-section incident writeups by default
- dashboards, registries, or process ceremony
Core workflow
error / correction / discovery
↓
log one line in .learnings/
↓
dedup by id, then keyword
↓
count:3+ or severity:critical → promotion-queue
↓
human reviews promotion
↓
check relevant learnings before major work
↓
increment prevented:N when a learning actually changed behavior
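The logging and dedup steps above can be sketched as a small append helper. The `log_error` function below is hypothetical (the skill only specifies the line format, not a helper API); it follows the error-line format shown later in this document:

```shell
# Hypothetical helper: append a one-line error learning, skipping
# exact-id duplicates. Args: id, COMMAND, what failed, fix, [severity].
log_error() {
  file=.learnings/errors.md
  mkdir -p .learnings && touch "$file"
  if grep -q "id:$1" "$file"; then
    # Dedup by id: an existing entry should get count:N bumped instead.
    echo "duplicate id: $1 (bump count:N instead)" >&2
    return 1
  fi
  echo "[$(date +%Y-%m-%d)] id:$1 | $2 | $3 | $4 | count:1 | prevented:0 | severity:${5:-medium} | source:agent" >> "$file"
}

log_error ERR-20250101-001 GIT "force-push clobbered remote" "use --force-with-lease"
```

Keeping the helper this small preserves the "log fast" rule: one function call, one line appended.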
Install
bash scripts/install.sh
Creates:
.learnings/
errors.md
learnings.md
wishes.md
promotion-queue.md
details/
archive/
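A quick sanity check that the installer produced the layout above (path names taken from the tree; this check is not part of the bundled scripts):

```shell
# Verify the expected .learnings layout exists after install.sh runs.
for p in errors.md learnings.md wishes.md promotion-queue.md details archive; do
  [ -e ".learnings/$p" ] || echo "missing: .learnings/$p"
done
```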
Minimal instruction snippet
Add this to your agent instructions:
## Self-Improvement
Before major tasks: grep .learnings/*.md for relevant past issues.
After errors or corrections: log a one-line entry using agent-memory-loop.
Never auto-write to SOUL.md, AGENTS.md, TOOLS.md, or similar instruction files.
Stage candidate rule changes in .learnings/promotion-queue.md for human review.
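The pre-task scan in the snippet above can be as simple as a keyword grep over the learnings files; the keywords here (docker, timeout) are illustrative, not part of the skill:

```shell
# Pre-task scan: look for prior failures relevant to the upcoming task.
grep -i -n -E 'docker|timeout' .learnings/*.md 2>/dev/null || echo "no matching learnings"
```

`-i` keeps the match case-insensitive and `-n` reports line numbers, so the agent can quote the exact prior entry.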
The format, in short
One incident or discovery per line. Extra fields are optional.
[YYYY-MM-DD] id:ERR-YYYYMMDD-NNN | COMMAND | what failed | fix | count:N | prevented:N | severity:medium | source:agent
[YYYY-MM-DD] id:LRN-YYYYMMDD-NNN | CATEGORY | what | action | count:N | prevented:N | severity:medium | source:agent
[YYYY-MM-DD] CAPABILITY | what was wanted | workaround | requested:N
[YYYY-MM-DD] id:LRN-YYYYMMDD-NNN | proposed rule text | target: AGENTS.md | source:agent | evidence: count:N prevented:N | status: pending
Key fields:
- count:N tracks recurrence
- prevented:N tracks loop closure
- severity:critical forces review even at count 1
- source:external is never promotable
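Bumping count:N on a recurring line can be done with a small in-place edit. The `bump_count` function below is a sketch (the skill's actual dedup logic lives in the bundled scripts):

```shell
# Sketch: increment count:N on the line matching a given id.
# Args: id, file. Rewrites the file via a temp copy.
bump_count() {
  awk -v id="$1" '
    index($0, "id:" id) && match($0, /count:[0-9]+/) {
      # Extract the digits after "count:" (6 chars), add one, splice back.
      n = substr($0, RSTART + 6, RLENGTH - 6) + 1
      $0 = substr($0, 1, RSTART - 1) "count:" n substr($0, RSTART + RLENGTH)
    }
    { print }
  ' "$2" > "$2.tmp" && mv "$2.tmp" "$2"
}
```

Using awk's `match`/`RSTART`/`RLENGTH` keeps the arithmetic in one pass, which sed alone cannot do portably.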
Operating rules
- Log fast; prefer a one-line entry over a perfect writeup
- Dedup before appending
- Queue recurring or critical lessons for review
- Humans approve promotions; agents do not
- Before major work, scan for relevant prior failures
- If a learning prevented a repeat mistake, record that by incrementing prevented:N
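The queue rule (count:3+ or severity:critical, with source:external never promotable) can be surfaced with a grep like this sketch; the promotion decision itself stays human-approved:

```shell
# List entries eligible for the promotion queue: count of 3 or more,
# or critical severity. Entries labeled source:external are excluded.
grep -hE 'count:([3-9]|[0-9]{2,})|severity:critical' \
  .learnings/errors.md .learnings/learnings.md 2>/dev/null \
  | grep -v 'source:external' || true
```

`-h` suppresses filenames so the output lines can be pasted straight into promotion-queue.md for review.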
References
- references/logging-format.md — canonical line formats, fields, examples, source labels
- references/operating-rules.md — dedup, review queue, pre-task review, trimming rules
- references/promotion-queue-format.md — queue entry structure and status lifecycle
- references/detail-template.md — optional detail-file template for complex failures
- references/design-tradeoffs.md — why this stays lean instead of turning into a system
Assets and scripts
- assets/errors.md
- assets/learnings.md
- assets/wishes.md
- assets/promotion-queue.md
- scripts/install.sh
- scripts/setup.sh
- scripts/review.sh
Success condition
The loop is working if agents actually use it:
- learnings are cheap to log
- duplicates stay low
- recurring lessons reach the queue
- promotions stay human-approved
- prevented:N starts climbing on real work