Self Improvement (done properly)

v1.0.0

Capture durable lessons from debugging, user corrections, missing capabilities, and repeated workflow friction so future sessions avoid the same mistakes.

by Tristan Manchester (@tristanmanchester)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for tristanmanchester/actual-self-improvement.

Prompt Preview: Install & Setup
Install the skill "Self Improvement (done properly)" (tristanmanchester/actual-self-improvement) from ClawHub.
Skill page: https://clawhub.ai/tristanmanchester/actual-self-improvement
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install actual-self-improvement

ClawHub CLI


npx clawhub@latest install actual-self-improvement
Security Scan

VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description (capture and promote durable learnings) matches the actual artifacts: scripts for initializing/searching/logging .learnings files, templates for Entries, an extractor to scaffold new skills, and hook/activator reminders. No extraneous credentials or unrelated binaries are requested.
Instruction Scope
SKILL.md and scripts focus on locating a workspace root and reading/writing .learnings/ files and promoting entries. This is within scope. Note: the tools will create and modify files under the workspace root (e.g., .learnings/ and, when extracting, skills/<name>), and the error-detector reads environment variables like CLAUDE_TOOL_OUTPUT to detect tool failures — this is expected for the described behavior.
Install Mechanism
Instruction-only with bundled scripts; no install spec, no external downloads, and no package installation. Scripts are local and standalone (Python 3.11+ recommended). This is low-risk and proportionate.
Credentials
The skill requests no credentials, no config paths outside the workspace, and uses only standard env variables for tool output detection (CLAUDE_TOOL_OUTPUT / TOOL_OUTPUT / exit codes). The amount of access (read/write inside a chosen workspace root) is appropriate for a logging/promotion tool.
Persistence & Privilege
always:false (default) and autonomous invocation is allowed (platform default). The skill optionally provides an OpenClaw hook that injects a virtual reminder at bootstrap; that is consistent with the stated purpose but means the skill can be hooked into session bootstrapping to surface reminders. The scripts can write files to the workspace and scaffold new skill directories when invoked — expected but worth noting.
Assessment
This skill appears coherent and focused: it will read and write .learnings files in whatever workspace root you point it at, emit reminders (via the hook or activator), detect command failures via provided environment variables, and can scaffold new skill directories from promoted learnings. There is no network access, no credential requests, and no downloads. Before enabling automatic hooks in your agent, decide whether you want the agent to be allowed to write into the chosen workspace (the scripts will create/append files under .learnings/ and may create a skills/ scaffold when extracting). If you have sensitive files in the same workspace, keep the skill's workspace root narrow or run the scripts manually. If you want stricter guarantees, review any created entries and scaffolds before promoting or publishing them.

Like a lobster shell, security has layers — review code before you run it.

Latest: vk971b646d7v6na978baxjvpg3d82tykf
3k downloads · 2 stars · 1 version
Updated 1mo ago
v1.0.0 · MIT-0

Self-Improvement

Capture, review, promote, and extract durable lessons so future sessions avoid repeating the same mistakes.

Core idea

Use this skill for reusable learning, not for every bump in the road.

A good entry usually has at least one of these properties:

  • It corrected a wrong assumption.
  • It revealed a project-specific convention.
  • It required real debugging or investigation.
  • It is likely to recur.
  • It should change future workflow, memory, or tooling.

Do not log routine noise such as obvious typos, expected validation failures, or errors that were solved immediately with no transferable lesson.

Important path model

There are two different roots in this skill:

  1. Skill root — where bundled resources live:

    • scripts/...
    • references/...
    • assets/...
  2. Workspace root — where the project or active workspace lives:

    • .learnings/LEARNINGS.md
    • .learnings/ERRORS.md
    • .learnings/FEATURE_REQUESTS.md
    • CLAUDE.md, AGENTS.md, .github/copilot-instructions.md, SOUL.md, TOOLS.md

Never write learnings into the installed skill directory. Always target the workspace root.

Quick decision table

| Situation | What to do |
| --- | --- |
| User corrects you or updates a fact | Log a learning |
| Non-obvious command / API / tool failure | Log an error |
| User asks for a missing capability | Log a feature request |
| You discover a reusable workaround or convention | Log a learning |
| A pattern keeps recurring | Search related entries, link with See Also, and consider promotion |
| A lesson is broadly applicable or repeated | Promote it into project memory |
| A resolved, general pattern could help other projects | Extract a new skill |

Standard workflow

1) Find the workspace root first

Before reading or writing .learnings/, determine WORKSPACE_ROOT.

Good defaults:

  • the repository root for the current codebase
  • the OpenClaw workspace root
  • the directory containing the files being edited

If unsure, prefer the directory containing .git, AGENTS.md, CLAUDE.md, or the user's active project files.
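That fallback can be sketched as a small shell helper. The function name and the exact marker list below are illustrative, not part of the skill's API:

```shell
# Hypothetical helper: walk upward from the current directory until one of the
# workspace markers named above (.git, AGENTS.md, CLAUDE.md) is found.
find_workspace_root() {
  dir="$(pwd)"
  while [ "$dir" != "/" ]; do
    for marker in .git AGENTS.md CLAUDE.md; do
      if [ -e "$dir/$marker" ]; then
        printf '%s\n' "$dir"
        return 0
      fi
    done
    dir="$(dirname "$dir")"
  done
  return 1   # no marker found; ask the user instead of guessing
}
```

A caller would then pass the result as `--root` to the scripts below.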

2) Initialise .learnings/ if needed

Use the helper instead of creating files manually:

python3 scripts/learnings.py init --root /absolute/path/to/workspace

This creates:

  • .learnings/LEARNINGS.md
  • .learnings/ERRORS.md
  • .learnings/FEATURE_REQUESTS.md

3) Review existing learnings before risky or familiar work

Review first when:

  • you are returning to an area with prior failures
  • the task touches infra, CI, deployment, auth, data migration, or generated code
  • the user explicitly says “remember this”, “we hit this before”, or similar

Use the helper:

python3 scripts/learnings.py status --root /absolute/path/to/workspace
python3 scripts/learnings.py search --root /absolute/path/to/workspace --query "pnpm" --limit 5

4) Search before logging to avoid duplicates

Always search for related entries before creating a new one.

python3 scripts/learnings.py search --root /absolute/path/to/workspace --query "keyword or pattern" --limit 10

If a similar entry already exists:

  • prefer linking with See Also
  • reuse or add a stable Pattern-Key for recurring issues
  • bump priority only when recurrence justifies it
  • prefer updating the existing pattern story over spraying near-duplicate entries
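As an illustration of linking rather than duplicating, a follow-up entry might carry a stable key and a cross-reference like the fragment below. The field names here are only indicative; the authoritative schema lives in references/entry-formats.md:

```
### LRN-20260320-002 — pnpm workspace assumed to be npm (again)
Pattern-Key: package-manager-pnpm
See Also: LRN-20260313-001
Status: active
```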

5) Log the right kind of entry

Learning

Use for corrections, knowledge gaps, best practices, and durable conventions.

python3 scripts/learnings.py log-learning \
  --root /absolute/path/to/workspace \
  --category correction \
  --priority high \
  --area backend \
  --summary "Project uses pnpm workspaces, not npm" \
  --details "Attempted npm install. Lockfile and workspace config showed pnpm." \
  --suggested-action "Check for pnpm-lock.yaml before assuming npm." \
  --source error \
  --related-files pnpm-lock.yaml pnpm-workspace.yaml \
  --tags package-manager,pnpm

Error

Use for non-obvious failures, exceptions, or tool/API issues worth remembering.

python3 scripts/learnings.py log-error \
  --root /absolute/path/to/workspace \
  --name docker-build \
  --priority high \
  --area infra \
  --summary "Docker build failed on Apple Silicon due to platform mismatch" \
  --error-text "error: failed to solve: no match for platform linux/arm64" \
  --context "docker build -t myapp . on Apple Silicon" \
  --suggested-fix "Retry with --platform linux/amd64 or update base image" \
  --reproducible yes \
  --related-files Dockerfile


Feature request

Use when the user asks for a missing capability, or when a recurring friction point should become a feature.

python3 scripts/learnings.py log-feature \
  --root /absolute/path/to/workspace \
  --capability export-to-csv \
  --priority medium \
  --area backend \
  --summary "User needs report export to CSV" \
  --user-context "Needed for sharing weekly reports with non-technical stakeholders" \
  --complexity-estimate simple \
  --suggested-implementation "Add --output csv alongside existing JSON output" \
  --frequency recurring \
  --related-features analyze-command,json-output

6) Promote proven lessons into memory

Promote when the learning is broad, repeated, or something any future contributor should know.

Common targets:

  • CLAUDE.md — durable project facts and conventions
  • AGENTS.md — workflow rules and automation guidance
  • .github/copilot-instructions.md — shared Copilot context
  • SOUL.md — behavioural principles in OpenClaw workspaces
  • TOOLS.md — tool-specific gotchas in OpenClaw workspaces

Write promotions as short prevention rules, not long incident write-ups.

Example:

  • Bad promotion: “On 2026-03-12 npm failed because…”
  • Good promotion: “Use pnpm install in this repo; it is a pnpm workspace.”

When a learning is promoted, update the original entry’s status to promoted or promoted_to_skill and record the destination.
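Continuing the pnpm example, the promoted rule might land in CLAUDE.md as a single line under an existing conventions section (the heading below is illustrative):

```
## Conventions
- Use `pnpm install`; this repo is a pnpm workspace. (Promoted from LRN-20260313-001.)
```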

7) Extract a reusable skill when the pattern is real

Extract a new skill when the solution is:

  • resolved and working
  • broadly useful beyond one file or repo
  • non-obvious enough that future agents would benefit
  • recurring enough to justify its own instructions

Use the helper:

python3 scripts/extract_skill.py \
  --root /absolute/path/to/workspace \
  docker-build-fixes \
  --description "Fix recurring Docker build and platform mismatch issues. Use when Docker builds fail due to architecture, base image, or runtime packaging problems." \
  --from-learning-id LRN-20260313-001 \
  --scaffold-evals

Or keep the old entry point if existing automation already calls it:

bash scripts/extract-skill.sh docker-build-fixes --root /absolute/path/to/workspace --dry-run

Logging rules that matter most

  1. Search first. Duplicate entries are worse than missing tags.
  2. Prefer durable lessons. Only log what should change future behaviour.
  3. Be specific. Name the assumption, failure, or convention clearly.
  4. Include the fix or prevention rule. An entry without next action is weak.
  5. Use stable pattern keys for recurring problems. This lets recurrence compound.
  6. Promote aggressively once a rule is proven. The point is fewer repeat mistakes.
  7. Do not interrupt the user with bookkeeping. Log silently unless the user asked to see it or you need missing details.

Recommended references

Use these only when needed:

  • references/entry-formats.md — full field schemas and manual templates
  • references/examples.md — concrete examples of good entries and promotions
  • references/promotion-and-extraction.md — promotion rules and skill extraction criteria
  • references/platform-setup.md — Claude Code, Codex, Copilot, and OpenClaw setup notes
  • references/evaluation.md — trigger/output eval plan for this skill
  • references/openclaw-integration.md — deeper OpenClaw workflow guidance

Hooks

Hook helpers are intentionally optional.

Available hook scripts:

  • scripts/activator.sh — lightweight reminder at prompt start
  • scripts/error-detector.sh — lightweight error reminder after failed Bash-like commands

Hook configuration examples live in references/platform-setup.md.
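For context, the detection step of the error hook can be sketched roughly as below. This is an illustrative sketch only, not the shipped script; the real logic lives in scripts/error-detector.sh, and the only grounded assumption is that it reads tool output from CLAUDE_TOOL_OUTPUT (falling back to TOOL_OUTPUT) as described in the skill metadata:

```shell
# Illustrative sketch: read the tool output the platform exposes and emit a
# reminder when it looks like a failure worth logging.
output="${CLAUDE_TOOL_OUTPUT:-${TOOL_OUTPUT:-}}"
if printf '%s' "$output" | grep -qiE 'error|failed|exception|traceback'; then
  echo "Reminder: log this with scripts/learnings.py log-error if it taught you something."
fi
```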

What “next-level” looks like for this skill

A mature use of this skill has a loop:

capture → dedupe → promote → extract → evaluate

That means:

  • entries are created with deterministic IDs and consistent fields
  • repeated issues link to each other instead of fragmenting
  • proven rules move into persistent memory files
  • broadly useful fixes become standalone skills
  • the skill itself is tested with trigger and output evals in evals/
