Self Optimization

v1.0.2

Turn mistakes, corrections, dead ends, and repeated fixes into durable improvements. Use when work reveals a non-obvious lesson, a recurring failure, a missi...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for alethean-kaw/self-optimization.

Prompt preview (Install & Setup):
Install the skill "Self Optimization" (alethean-kaw/self-optimization) from ClawHub.
Skill page: https://clawhub.ai/alethean-kaw/self-optimization
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install self-optimization

ClawHub CLI


npx clawhub@latest install self-optimization
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description match the provided files. The package includes templates, lightweight hooks, and helper scripts to record and promote learnings; these are appropriate for a "self-optimization" skill.
Instruction Scope
SKILL.md instructs creating and writing to a workspace .learnings/ inbox and promoting entries. The included hooks and scripts are limited to emitting reminders, scanning local tool output for common error patterns, and scaffolding new skill templates. One runtime script reads the CLAUDE_TOOL_OUTPUT environment variable (used to surface tool output) — this is consistent with an error-detector hook but is not declared in requires.env, so it's worth noting.
Install Mechanism
No external downloads or package installs. This is effectively instruction-plus-local-scripts; everything is contained in the repo and scripts are executed locally. No URLs, extract operations, or third-party installers are used.
Credentials
The skill declares no required environment variables or credentials, which is appropriate. The only environment usage observed is reading CLAUDE_TOOL_OUTPUT in scripts/error-detector.sh (expected for a PostToolUse hook). No other secrets, tokens, or unrelated credentials are requested.
Persistence & Privilege
always:false and no attempts to modify other skills or global agent config were found. The hook injects a virtual bootstrap file and scripts create files under the current workspace or a relative ./skills path — this is proportionate for the stated purpose.
Assessment
This skill is internally coherent and appears safe: it only writes local markdown entries, emits reminders, and scaffolds templates. Before installing:

  1. Review the scripts (scripts/*.sh) to confirm you are comfortable with local writes; they create files under the workspace and avoid writing outside the current directory.
  2. Hooks are opt-in: only enable the OpenClaw/Codex/Claude hooks if you want the reminders.
  3. The error detector reads CLAUDE_TOOL_OUTPUT (tool output provided by the agent runtime) to detect failures; that data stays local unless you explicitly add forwarding.
  4. Ensure scripts are executable and that the workspace .learnings/ location has appropriate permissions.
  5. If you prefer not to have automatic reminders, install without enabling the hooks, or narrow the hook matcher.

Overall, nothing here indicates credential exfiltration or remote code fetching, but always review the code and only enable hooks you trust.


Latest: vk9792cy0pxpwp2sv754wy1d791842xsz
137 downloads · 0 stars · 3 versions
Updated 3w ago · v1.0.2 · MIT-0

Self-Optimization

Use this skill to close the loop after real work. The goal is not just to log what went wrong. The goal is to convert signal from mistakes, corrections, and repeated effort into stronger future behavior.

Core Loop

  1. Detect meaningful signal.
  2. Capture it in .learnings/.
  3. De-duplicate and link related entries.
  4. Promote stable patterns into durable guidance.
  5. Extract reusable skills when the pattern is broad and proven.

Quick Reference

| Situation | Action |
| --- | --- |
| Command, tool, or integration fails unexpectedly | Append an entry to .learnings/ERRORS.md |
| User corrects the agent or provides missing facts | Append an entry to .learnings/LEARNINGS.md |
| A better repeatable approach is discovered | Append an entry to .learnings/LEARNINGS.md |
| User asks for a missing capability | Append an entry to .learnings/FEATURE_REQUESTS.md |
| Same issue keeps reappearing | Link entries, bump priority, and consider promotion |
| Pattern is stable across tasks | Promote to AGENTS.md, CLAUDE.md, TOOLS.md, SOUL.md, or .github/copilot-instructions.md |
| Pattern is reusable beyond one repo | Extract a new skill scaffold |

Detection Triggers

Capture a learning when any of these happen:

  • The first attempt was wrong and needed correction.
  • A tool or command failed in a non-obvious way.
  • The user revealed a project convention that was not documented.
  • The agent discovered a stronger pattern than the one it started with.
  • The same workaround or warning has appeared more than once.
  • The user asked for a capability the current system does not provide.

Skip noisy one-off trivia. Capture things that would realistically save a future session time, confusion, or rework.

Log Files

Create a local .learnings/ directory in the project workspace (or in the OpenClaw workspace).

.learnings/
├── LEARNINGS.md
├── ERRORS.md
└── FEATURE_REQUESTS.md
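The layout above can be scaffolded with a few lines of shell. This is a sketch, not one of the skill's shipped scripts; the guard keeps re-runs from clobbering logs that already contain entries:

```shell
# Scaffold the .learnings/ inbox shown above in the current workspace.
mkdir -p .learnings
for f in LEARNINGS.md ERRORS.md FEATURE_REQUESTS.md; do
  # Only create the file if it is missing, so re-running the scaffold
  # never overwrites entries that have already been logged.
  if [ ! -f ".learnings/$f" ]; then
    printf '# %s\n\n' "${f%.md}" > ".learnings/$f"
  fi
done
```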

LEARNINGS.md

Use for:

  • corrections
  • knowledge gaps
  • best practices
  • project conventions
  • improved workflows

Template:

## [LRN-YYYYMMDD-XXX] category

**Logged**: 2026-04-01T10:00:00Z
**Priority**: low | medium | high | critical
**Status**: pending
**Area**: frontend | backend | infra | tests | docs | config

### Summary
One-line statement of the lesson.

### Details
What was wrong, what changed, and what is now known to be correct.

### Suggested Action
What to do differently next time.

### Metadata
- Source: conversation | debugging | user_feedback | simplify-and-harden
- Related Files: path/to/file
- Tags: tag-a, tag-b
- See Also: LRN-20260401-001
- Pattern-Key: optional.stable.key
- Recurrence-Count: 1
- First-Seen: 2026-04-01
- Last-Seen: 2026-04-01

---
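Appending an entry that follows this template is a single heredoc. Every field value below is an illustrative placeholder, not a real learning:

```shell
# Append one illustrative entry to LEARNINGS.md, following the
# template above. All content here is placeholder example data.
mkdir -p .learnings
cat >> .learnings/LEARNINGS.md <<'EOF'
## [LRN-20260401-001] project-convention

**Logged**: 2026-04-01T10:00:00Z
**Priority**: medium
**Status**: pending
**Area**: config

### Summary
Integration tests read DB settings from config/test.yaml, not env vars.

### Suggested Action
Edit config/test.yaml when pointing tests at a different database.

### Metadata
- Source: debugging
- Recurrence-Count: 1

---
EOF
```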

ERRORS.md

Use for:

  • command failures
  • exceptions
  • bad tool assumptions
  • API or integration breakage

Template:

## [ERR-YYYYMMDD-XXX] command_or_tool

**Logged**: 2026-04-01T10:00:00Z
**Priority**: medium
**Status**: pending
**Area**: backend | infra | tests | docs | config

### Summary
Short description of the failure.

### Error
```text
Actual error output goes here.
```

### Context
- Command or action attempted
- Relevant inputs
- Environment details if useful

### Suggested Fix
What should be tried next or documented.

### Metadata
- Reproducible: yes | no | unknown
- Related Files: path/to/file
- See Also: ERR-20260401-001

---

FEATURE_REQUESTS.md

Use for:

  • missing tooling
  • automation requests
  • product gaps
  • missing agent behaviors

Template:

## [FEAT-YYYYMMDD-XXX] capability_name

**Logged**: 2026-04-01T10:00:00Z
**Priority**: low | medium | high
**Status**: pending
**Area**: frontend | backend | infra | tests | docs | config

### Requested Capability
What the user wanted.

### User Context
Why they wanted it.

### Complexity Estimate
simple | medium | complex

### Suggested Implementation
How it might be built or extended.

### Metadata
- Frequency: first_time | recurring
- Related Features: existing_feature

---

ID Format

Use TYPE-YYYYMMDD-XXX.

  • LRN for learning
  • ERR for error
  • FEAT for feature request

Examples:

  • LRN-20260401-001
  • ERR-20260401-002
  • FEAT-20260401-003
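A small helper can compute the next sequence number for today. next_id is a hypothetical name, not one of the skill's shipped scripts:

```shell
# next_id TYPE FILE: print the next TYPE-YYYYMMDD-XXX identifier,
# based on how many entries that file already has for today (UTC).
next_id() {
  today=$(date -u +%Y%m%d)
  # Count today's entry headers; a missing file counts as zero.
  count=$(grep -c "^## \[$1-$today-" "$2" 2>/dev/null)
  [ -n "$count" ] || count=0
  printf '%s-%s-%03d\n' "$1" "$today" "$((count + 1))"
}
```

For example, next_id LRN .learnings/LEARNINGS.md prints LRN-(today)-001 against an empty log and increments from there.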

Promotion Rules

Promote an entry when it becomes more valuable as guidance than as a historical note.

| Target | Promote When |
| --- | --- |
| CLAUDE.md | Project facts, conventions, or recurring gotchas |
| AGENTS.md | Workflow rules, delegation patterns, automation steps |
| .github/copilot-instructions.md | Repo guidance that should reach Copilot |
| TOOLS.md | Tool quirks, auth requirements, environment gotchas |
| SOUL.md | Behavioral or communication rules for OpenClaw sessions |

Promotion checklist:

  1. Distill the learning into a short prevention rule.
  2. Add it to the right target file.
  3. Update the original entry status to promoted.
  4. Record where it was promoted.
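Steps 3 and 4 can be scripted. mark_promoted is a hypothetical helper; sed keeps a .bak backup so the in-place change is easy to review:

```shell
# mark_promoted ID FILE TARGET: flip the entry's Status line from
# pending to promoted, recording where the rule now lives.
mark_promoted() {
  # Restrict the substitution to the block between the entry header
  # and its closing "---" separator, so other entries are untouched.
  sed -i.bak "/\[$1\]/,/^---$/ s/\*\*Status\*\*: pending/**Status**: promoted (to $3)/" "$2"
}
```

Usage: mark_promoted LRN-20260401-001 .learnings/LEARNINGS.md AGENTS.md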

Recurrence And Dedupe

Before creating a new entry for a familiar issue:

  1. Search .learnings/ for a related keyword or Pattern-Key.
  2. If a related item exists, link it with See Also.
  3. Increase Recurrence-Count and refresh Last-Seen.
  4. Escalate priority if the pattern is recurring and costly.
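Step 1 is a one-line grep. seen_before is a hypothetical helper name, and the Pattern-Key value is a placeholder:

```shell
# seen_before KEY: succeed if any logged entry already carries this
# Pattern-Key, which means we should link rather than duplicate.
seen_before() {
  grep -rq "Pattern-Key: $1" .learnings/ 2>/dev/null
}

if seen_before "docker.build.cache-miss"; then
  echo "link with See Also and bump Recurrence-Count"
else
  echo "new pattern: create a fresh entry"
fi
```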

Recurring issues often mean one of three things:

  • documentation is missing
  • automation is missing
  • the architecture or workflow is inviting the same failure

When To Extract A Skill

Extract a reusable skill when the pattern is:

  • resolved and trustworthy
  • useful across multiple tasks
  • non-obvious enough to justify explicit guidance
  • portable beyond a single private incident

Use the helper:

./skills/self-optimization/scripts/extract-skill.sh my-new-skill --dry-run
./skills/self-optimization/scripts/extract-skill.sh my-new-skill

Then customize the generated SKILL.md and update the original learning entry with:

  • Status: promoted_to_skill
  • Skill-Path: skills/my-new-skill

Review Rhythm

Review .learnings/ at these checkpoints:

  • before major tasks
  • after finishing a feature or bugfix
  • when working in an area with previous failures
  • during periodic maintenance

Useful checks:

grep -h "Status\\*\\*: pending" .learnings/*.md | wc -l
grep -h -B5 "Priority\\*\\*: high" .learnings/*.md | grep "^## \\["
grep -l "Area\\*\\*: backend" .learnings/*.md

OpenClaw Setup

OpenClaw works especially well with this skill because workspace files and hooks let the improvement loop stay visible between sessions.

Install

clawhub install self-optimization

Manual install:

git clone <your-fork-or-source-repo> ~/.openclaw/skills/self-optimization

This package is an OpenClaw-oriented evolution of the earlier self-learning workflow.

Hook Setup

Optional bootstrap reminder:

cp -r hooks/openclaw ~/.openclaw/hooks/self-optimization
openclaw hooks enable self-optimization

Workspace Layout

~/.openclaw/workspace/
├── AGENTS.md
├── SOUL.md
├── TOOLS.md
├── MEMORY.md
├── memory/
└── .learnings/
    ├── LEARNINGS.md
    ├── ERRORS.md
    └── FEATURE_REQUESTS.md

Hook Support For Other Agents

Claude Code / Codex

Use hook scripts in settings:

{
  "hooks": {
    "UserPromptSubmit": [{
      "matcher": "",
      "hooks": [{
        "type": "command",
        "command": "./skills/self-optimization/scripts/activator.sh"
      }]
    }],
    "PostToolUse": [{
      "matcher": "Bash",
      "hooks": [{
        "type": "command",
        "command": "./skills/self-optimization/scripts/error-detector.sh"
      }]
    }]
  }
}
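For reference, a PostToolUse detector of this shape can be sketched as below. The real scripts/error-detector.sh ships with the skill; this simplified stand-in (detect_errors, an illustrative name) only shows the mechanism of reading CLAUDE_TOOL_OUTPUT:

```shell
# detect_errors: simplified stand-in for scripts/error-detector.sh.
# The agent runtime exposes the last tool output in CLAUDE_TOOL_OUTPUT;
# on a known failure marker, append a stub entry to .learnings/ERRORS.md.
detect_errors() {
  out="${CLAUDE_TOOL_OUTPUT:-}"
  case "$out" in
    *"command not found"*|*Traceback*|*"fatal:"*)
      mkdir -p .learnings
      {
        printf '## [ERR-%s-XXX] auto-detected\n\n' "$(date -u +%Y%m%d)"
        printf '**Status**: pending\n\n### Error\n%s\n\n---\n' "$out"
      } >> .learnings/ERRORS.md
      ;;
  esac
}
```

Nothing leaves the workspace: the function only appends locally, matching the behavior described in the security scan above.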

GitHub Copilot

Add a reminder to .github/copilot-instructions.md:

## Self-Optimization

After solving non-obvious issues, consider:
1. Logging the lesson to `.learnings/`
2. Linking related recurring entries
3. Promoting stable rules into repo guidance
4. Extracting reusable skills when the pattern is broad

Best Practices

  1. Log signal, not noise.
  2. Prefer prevention rules over postmortems.
  3. Link related incidents instead of duplicating them.
  4. Promote broadly useful guidance quickly.
  5. Treat repeated friction as a systems problem, not just a note-taking problem.
  6. Review learnings before repeating the same class of work.
