Install
openclaw skills install claw-self-improving-plus

Turn raw mistakes, corrections, discoveries, and repeated decisions into structured learnings and promotion candidates. Build a conservative learning pipeline. Optimize for signal, not clutter.
Do not auto-rewrite long-term memory or behavior files by default.
Use this flow: capture, score, merge, draft patches, review, apply, report.

Use these types:
- mistake: the agent did something wrong
- correction: the user corrected a wrong assumption or behavior
- discovery: a useful fact about environment, tools, preferences, or workflow
- decision: a durable preference, policy, or chosen design
- regression: a known failure mode that should not recur

Store each learning candidate as JSON with these fields:
- id: stable slug or timestamped id
- timestamp
- source
- type
- summary
- details
- evidence
- confidence
- reuse_value
- impact_scope
- promotion_target_candidates
- status
- related_ids

Default enums:
- confidence: low|medium|high
- reuse_value: low|medium|high
- impact_scope: single-task|project|workspace|cross-session
- status: captured|scored|merged|promoted|rejected

Promote by destination, not vibes:
- SOUL.md: durable style, personality, voice rules
- AGENTS.md: operating rules, workflows, safety/process lessons
- TOOLS.md: environment-specific commands, paths, model/tool preferences
- MEMORY.md: important long-term facts about user, projects, decisions, history

If a learning does not clearly deserve promotion, keep it in the raw log.
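Putting the fields and enums together, a captured candidate might look like this. Only the field names and enum values come from the spec above; all concrete values are illustrative:

```python
import json

# Illustrative learning record; field names and enums from the spec above,
# concrete values invented for the example.
record = {
    "id": "2024-05-01-prefer-uv-over-pip",
    "timestamp": "2024-05-01T14:32:00Z",
    "source": "session-log",
    "type": "discovery",
    "summary": "User prefers uv over pip for installs",
    "details": "pip install was corrected to uv pip install twice in one session.",
    "evidence": ["turn 14", "turn 31"],
    "confidence": "medium",
    "reuse_value": "high",
    "impact_scope": "workspace",
    "promotion_target_candidates": ["TOOLS.md"],
    "status": "captured",
    "related_ids": [],
}

# One JSON object per line is the JSONL shape used by .learnings/inbox.jsonl.
line = json.dumps(record)
```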
Score each record on five dimensions:
- reuse_value: will this help again?
- confidence: how well supported is it?
- impact_scope: how broadly does it matter?
- promotion_worthiness: should it become a lasting rule or memory?
- promotion_target_candidates: where should it go if promoted?

Use the practical rubric in references/scoring-rubric.md.
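The shipped rubric lives in references/scoring-rubric.md; as a rough illustration only, the enum dimensions could be collapsed into a sortable priority like this (the weighting is hypothetical, not the bundled rubric):

```python
# Hypothetical priority scoring; the real rubric is in references/scoring-rubric.md.
LEVELS = {"low": 1, "medium": 2, "high": 3}
SCOPES = {"single-task": 1, "project": 2, "workspace": 3, "cross-session": 4}

def priority(record: dict) -> int:
    """Higher score = stronger promotion candidate."""
    return (
        LEVELS[record["reuse_value"]] * LEVELS[record["confidence"]]
        + SCOPES[record["impact_scope"]]
    )
```

Multiplying reuse by confidence keeps low-confidence guesses from outranking well-supported, reusable lessons.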
Prefer anchored insertion or exact replacement over blind append.
Each patch may contain:

- target_file
- anchor
- insert_mode
- old_text
- new_text
- suggested_entry
- approved
- review_status

Use exact replacement when the old text is known. Use anchored insertion when the destination section is known. Use append only as fallback.
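A patch candidate built from those fields might look like this. The field names come from the list above; the concrete values, including the insert_mode and review_status strings, are illustrative assumptions:

```python
# Illustrative patch candidate; field names from the spec, values assumed.
patch = {
    "target_file": "TOOLS.md",
    "anchor": "## Package management",   # hypothetical section heading to insert under
    "insert_mode": "anchored",           # assumed values: "exact" / "anchored" / "append"
    "old_text": None,                    # set when exact replacement is possible
    "new_text": "- Prefer uv over pip for installs.",
    "suggested_entry": "- Prefer uv over pip for installs.",
    "approved": False,                   # flipped during human review
    "review_status": "pending",          # assumed initial state
}
```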
Use a stable .learnings/ structure. See references/learning-store-layout.md.
Recommended files:
- .learnings/inbox.jsonl
- .learnings/scored.jsonl
- .learnings/merge.json
- .learnings/patches.json
- .learnings/apply-report.json
- .learnings/archive/

Append raw learnings into .learnings/inbox.jsonl.
Use scripts/capture_learning.py to create normalized records.
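capture_learning.py's exact interface isn't shown here; the essential move it performs is appending one normalized record per line to the inbox. A minimal sketch of that step:

```python
import json
from pathlib import Path

def append_learning(record: dict, inbox: Path = Path(".learnings/inbox.jsonl")) -> None:
    """Append one normalized record as a single JSONL line."""
    inbox.parent.mkdir(parents=True, exist_ok=True)
    with inbox.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

Append-only JSONL keeps capture cheap and conflict-free; scoring and merging happen in later passes.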
Run scripts/score_learnings.py on the inbox or a batch export.
Run scripts/merge_candidates.py to group likely duplicates.
Run scripts/draft_patches.py to produce anchored reviewable patch candidates.
Use scripts/review_patches.py to list, approve, reject, or skip candidates.
Examples:
python scripts/review_patches.py .learnings/patches.json list
python scripts/review_patches.py .learnings/patches.json act --index 1 --action approve
python scripts/review_patches.py .learnings/patches.json act --index 2 --action reject --note "too vague"
Run scripts/apply_approved_patches.py.
This script applies only entries that were explicitly approved.
It validates allowed targets, supports --dry-run, skips duplicate entries already present, and prefers exact replacement, then anchored insertion, then append fallback.
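The exact-replacement, then anchored-insertion, then append preference can be sketched as follows (simplified: the real script also validates allowed targets and skips duplicates):

```python
def apply_patch(text: str, patch: dict) -> str:
    """Prefer exact replacement, then anchored insertion, then append fallback."""
    old, new = patch.get("old_text"), patch["new_text"]
    anchor = patch.get("anchor")
    if old and old in text:
        return text.replace(old, new, 1)           # exact replacement
    if anchor and anchor in text:
        i = text.index(anchor) + len(anchor)
        return text[:i] + "\n" + new + text[i:]    # anchored insertion
    return text + "\n" + new                       # append fallback
```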
When reporting results, use this structure:
- new_candidates: count
- high_priority: count
- merge_groups: count
- patch_candidates: short bullet list
- needs_human_review: yes

Bundled references and scripts:

- references/scoring-rubric.md
- references/promotion-targets.md
- references/learning-store-layout.md
- scripts/capture_learning.py
- scripts/score_learnings.py
- scripts/merge_candidates.py
- scripts/draft_patches.py
- scripts/detect_patch_conflicts.py
- scripts/consolidate_learnings.py
- scripts/build_backlog.py
- scripts/age_backlog.py
- scripts/review_backlog.py
- scripts/check_existing_promotions.py
- scripts/review_patches.py
- scripts/render_review.py
- scripts/apply_approved_patches.py
- scripts/archive_batch.py
- scripts/run_pipeline.py