Learning Evolution

Track, analyze, and evolve learning patterns from verified skill usage, user feedback, and observed outcomes. Use when planning skill improvements from real evidence.

Audits: Pass

Install

openclaw skills install learning-evolution


Overview

Use this skill to structure evidence-based learning reviews for OpenClaw skills. It creates local Markdown templates for usage analysis, effectiveness review, and evolution planning. The bundled scripts leave analytics, confidence scores, outcomes, and user feedback blank until you fill them from verified logs, direct feedback, or observed outcomes.

When to Use

  • Reviewing real usage patterns for a skill
  • Turning user feedback into improvement candidates
  • Tracking whether a published change improved outcomes
  • Planning a skill update with an explicit human review gate
  • Avoiding changes based only on guesses or anecdotal impressions

Data Rules

  • Do not treat blank template fields as measured facts.
  • Do not invent usage, satisfaction, completion, or error metrics.
  • Keep sensitive user feedback out of persisted reports unless retention is intentional.
  • Verify the source of every metric before using it to change a published skill.
  • Require human review before applying recommendations generated from reports.

Commands

Analyze Usage Patterns

./scripts/analyze-usage.sh --skill <name> --period 30d

Creates data/USAGE-<skill>-YYYYMMDD.md with fields for verified usage counts, completion signals, drop-off points, and evidence-backed insights.

Track Effectiveness

./scripts/track-effectiveness.sh --skill <name> --since 2024-01-01

Creates data/EFFECTIVENESS-<skill>-YYYYMMDD.md with fields for verified success metrics, error analysis, and review notes.

Suggest Evolutions

./scripts/suggest-evolutions.sh --skill <name> --min-confidence 0.7

Creates data/EVOLUTIONS-<skill>-YYYYMMDD.md with an evidence checklist and candidate table. It does not assign confidence values automatically.
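All three scripts follow the same dated-report naming convention. A minimal sketch of how such a path can be assembled (an illustration of the convention, not the scripts' actual internals):

```shell
# Assemble a report path like data/USAGE-<skill>-YYYYMMDD.md.
# "my-skill" is a placeholder for the value passed via --skill.
skill="my-skill"
stamp="$(date +%Y%m%d)"               # YYYYMMDD date suffix
out_dir="${LEARNING_DATA_DIR:-data}"  # default directory is data/
report="${out_dir}/USAGE-${skill}-${stamp}.md"
echo "$report"
```

The same pattern applies to the EFFECTIVENESS and EVOLUTIONS reports, with only the prefix changing.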

Input

Use real evidence such as:

  • Skill usage logs
  • User feedback and ratings
  • Reproducible error reports
  • Success or failure outcomes
  • Version comparison notes
  • Manual review observations

Output

The scripts create local Markdown templates under data/ (or the directory set by LEARNING_DATA_DIR):

  • Usage analysis template
  • Effectiveness review template
  • Evolution planning template

Each output includes a data policy reminder so agents and users do not confuse unfilled fields with measured analytics.

Review Workflow

  1. Run the relevant script to create a dated template.
  2. Fill in only verified facts and cite the source of each metric.
  3. Mark unknown values as TODO until evidence exists.
  4. Decide whether the evidence supports an incremental fix, a larger evolution, a pivot, or no change.
  5. Review the proposed change with a human before modifying or publishing another skill.
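The TODO convention in step 3 makes the review gate easy to check mechanically. A minimal sketch, using a temporary stand-in for a real data/ report (the field names and values are illustrative):

```shell
# Hold a report for evidence if any field is still marked TODO.
report="$(mktemp)"
cat > "$report" <<'EOF'
Verified usage count: 42 (source: gateway logs, 2024-06)
Completion rate: TODO
EOF

if grep -q 'TODO' "$report"; then
  echo "unverified fields remain; hold for evidence"
else
  echo "all fields filled; ready for human review"
fi
```

A clean grep only means every field has a value; a human still has to confirm each cited source before the change ships.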

Safety Notes

  • Scripts validate --skill before using it in report filenames.
  • Scripts write only local Markdown files.
  • Scripts do not access the network, read credentials, or modify other skills.
  • LEARNING_DATA_DIR can redirect report output; check it before running if you need a specific destination.
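The --skill validation mentioned above can be approximated as a character allowlist, so a hostile name cannot escape the report directory. This is a sketch of the idea; the bundled scripts' actual rules may be stricter:

```shell
# Accept only names safe to embed in a report filename:
# letters, digits, dot, underscore, hyphen, and no leading dot.
validate_skill() {
  case "$1" in
    ""|*[!A-Za-z0-9._-]*) return 1 ;;  # empty or unsafe characters (e.g. "/")
    .*) return 1 ;;                    # leading dot would hide files or allow ".."
  esac
  return 0
}

validate_skill "learning-evolution" && echo "ok"
validate_skill "../etc/passwd" || echo "rejected"
```

Names like "../etc/passwd" fail the allowlist because of the slash, which is what keeps report writes confined to data/ or LEARNING_DATA_DIR.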