Self Improving

v1.8.0

Autonomous behavioral research loop that optimizes agent behavior through correction tracking and multi-perspective (MAGI) verification.

by Sachin Chalapati (@teenu)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for teenu/magi.

Prompt Preview: Install & Setup
Install the skill "Self Improving" (teenu/magi) from ClawHub.
Skill page: https://clawhub.ai/teenu/magi
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install magi

ClawHub CLI


npx clawhub@latest install magi
Security Scan

VirusTotal: Benign (View report →)
OpenClaw: Benign (medium confidence)
Purpose & Capability
Name/description describe a self-improvement loop and the skill only requires editing three local files (memory.md, corrections.md, experiments.md). There are no unrelated env vars, binaries, or install steps requested — the required capabilities match the stated purpose.
Instruction Scope
SKILL.md confines activity to the included files and defines append/edit constraints. That scope matches the goal, but the agent is instructed to record user corrections and modify its own memory (including deleting rules). This creates legitimate privacy and self-reinforcement risks (it will store user content in logs and may change behavior based on its own measurements). No instructions reference external endpoints or unrelated system paths.
Install Mechanism
Instruction-only skill with no install spec and no code files. Nothing is written to disk by an installer beyond the skill's own files; lowest install risk.
Credentials
The skill requests no environment variables, credentials, or external config paths. That is proportionate to the declared functionality.
Persistence & Privilege
The skill does not request always:true and defaults to requiring user confirmation (autonomous:false). However, it is explicitly self-modifying (edits memory.md and appends logs). The append/edit constraints are procedural only — there is no enforcement mechanism in the SKILL.md, so the agent's ability to modify these files gives it persistent influence over future behavior and requires monitoring by the user.
Assessment
This skill appears to do what it says: track corrections, propose rules, and edit its local memory/log files. Before installing: (1) Be aware it will store user corrections and derived rules in the included files — avoid logging sensitive or private data in corrections.md. (2) Review and back up SKILL.md, memory.md, and experiments.md periodically because the agent is instructed to edit them (including deleting rules). (3) Keep autonomous mode disabled until you audit a few cycles — the loop can self-reinforce and drift without external oversight. (4) Confirm your platform enforces the append-only/read-only constraints you expect; if not, treat the skill as having full write access to its bundle and monitor for unexpected changes. If you need stronger guarantees (no local storage of user text, enforced append-only behavior, or audit logging to an external trusted store), request those controls before enabling this skill.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🧬 Clawdis
latest: vk976c1xmeq05k4tggjmqq8khhs83tm0b
194 downloads · 0 stars · 9 versions
Updated 4w ago · v1.8.0 · MIT-0

Self-Improving

Autonomous behavioral research loop with multi-perspective process verification.

Architecture

SKILL.md         # Policy — human edits
memory.md        # State — agent edits
experiments.md   # Log — append-only
corrections.md   # Data — append-only

Constraints: Three files are writable, each with a specific access mode:

  • memory.md — edit (add, modify, or delete rules)
  • corrections.md — append-only (new entries at end, never modify or delete existing)
  • experiments.md — append-only (new entries at end, never modify or delete existing)

SKILL.md is read-only to the agent — only the user edits policy. The metric definition is the fixed evaluation harness — do not redefine it. Do not infer from silence. The dataset is explicit corrections only.
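
These access modes are procedural, and the Assessment above notes SKILL.md has no enforcement mechanism of its own. A host platform that wants to enforce them can do so with a prefix check at write time. A minimal sketch in Python; checked_write is a hypothetical helper, not part of this skill:

from pathlib import Path

def checked_write(path: Path, new_text: str) -> None:
    """Enforce this skill's declared access modes before a write.

    SKILL.md: read-only to the agent.
    corrections.md / experiments.md: append-only, i.e. the old content
    must survive unchanged as a prefix of the new content.
    memory.md: freely editable.
    """
    name = path.name
    if name == "SKILL.md":
        raise PermissionError("SKILL.md is read-only to the agent")
    if name in ("corrections.md", "experiments.md"):
        old_text = path.read_text() if path.exists() else ""
        if not new_text.startswith(old_text):
            raise PermissionError(f"{name} is append-only; existing entries were altered")
    path.write_text(new_text)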

The Metric

Correction rate — how often the user corrects the agent. Lower is better. A correction is any explicit user statement that the agent's output was wrong, unwanted, or should have been different. User edits count. Ambiguous signals don't.

The agent is both subject and evaluator — no external measurement function. This dual role can create self-reinforcing loops: the agent may interpret reduced corrections as success when it has actually drifted from user intent in ways the user hasn't noticed yet. Compensate: require strong, unambiguous signals. Be conservative. When in doubt, ask the user rather than self-affirm.
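
The skill does not pin the rate to a formula; one plausible reading is corrections divided by relevant encounters over a recent window. A sketch under that assumption, with both counts supplied by the caller since only the agent observes encounters:

import math

def correction_rate(corrections: int, relevant_encounters: int) -> float:
    """Corrections per relevant encounter; lower is better.

    Mind the weak-signal trap described above: with few encounters,
    a rate of 0.0 means "untested", not "improved".
    """
    if relevant_encounters == 0:
        return math.nan  # no evidence either way
    return corrections / relevant_encounters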

The Experiment Loop

Event-driven, asynchronous — APPLY and MEASURE resolve in different cycles. Rules in Applied are concurrent independent experiments.

Baseline: On the first cycle, log the starting state (zero rules) in experiments.md.

Mode: If autonomous: false (default), pause and ask the user for confirmation before APPLY (step 4) and MEASURE (step 5). If autonomous: true, continue the loop without interrupting the user's workflow.

If out of ideas, re-read corrections.md, combine near-misses, try the opposite of what failed.

ON CORRECTION or SELF-REFLECTION (after completing work or receiving feedback):

1. LOG — Append to corrections.md: YYYY-MM-DD | wrong → wanted.

2. HYPOTHESIZE — What rule prevents this class of correction?
   Trace: observation → generalization → scope → rule.

3. VERIFY — Audit the reasoning chain through the MAGI Check.
   2/3 lenses pass on each of Steps 2–4 → proceed. Any step fails → discard.

4. APPLY — Write rule to memory.md Applied section.

5. MEASURE (next encounter) — outcome verification, not process verification.
   Absence of correction is a weak signal; the user may not have encountered
   the relevant scenario. Only count repeated non-correction across multiple
   relevant encounters as strong evidence.
   - User does NOT correct → KEEP. Move to Rules. Log "keep".
   - User corrects same class → FAILED. Delete from Applied. Log "revert".
   - 14 days untested → TIMEOUT. Delete from Applied. Log "discard".

Log = append one row to experiments.md at resolution (not at APPLY).
If VERIFY fails at step 3, log immediately as "discard".

Revert = delete the rule. Rules are independent lines — surgical deletion, not full-file restore. Immediate harm → delete, log "crash", move on.

Drift guard: If 3 consecutive experiments end in revert, discard, or crash, pause the loop and surface the pattern to the user regardless of autonomous mode. Consecutive failures suggest the agent is misreading the user's intent. Conversely, if 5 consecutive rules are kept without any user-initiated correction triggering the cycle, surface the current rule set for user review — a long streak of self-confirmed successes in a self-evaluating system is as suspect as a streak of failures.
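
Taken together, MEASURE and the drift guard reduce to a small amount of bookkeeping. A sketch of both, with illustrative names and a deliberately simplified keep rule (a single relevant encounter counts here, whereas the text asks for repeated non-correction before treating a keep as strong evidence):

from datetime import date, timedelta

TIMEOUT = timedelta(days=14)

def measure(applied_on: date, today: date,
            corrected_same_class: bool, relevant_encounter: bool) -> str | None:
    """Resolve one rule in Applied; None means still pending."""
    if corrected_same_class:
        return "revert"               # FAILED: delete from Applied
    if today - applied_on > TIMEOUT:
        return "discard"              # TIMEOUT: 14 days untested
    if relevant_encounter:
        return "keep"                 # KEEP: move to Rules
    return None                       # no signal yet; keep waiting

def drift_guard(statuses: list[str]) -> str | None:
    """Pause conditions over resolved experiments, newest last."""
    failures = {"revert", "discard", "crash"}
    if len(statuses) >= 3 and all(s in failures for s in statuses[-3:]):
        return "pause: 3 consecutive failures, surface the pattern to the user"
    if len(statuses) >= 5 and all(s == "keep" for s in statuses[-5:]):
        return "pause: 5 consecutive keeps, surface the rule set for review"
    return None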

Search (when stuck)

Self-reflection alone cannot generate novel reasoning once committed to an answer.

  • Re-read corrections.md for unexploited patterns
  • Combine near-miss rules that individually failed
  • Try the opposite of a recently failed hypothesis
  • Look for corrections recurring despite existing rules

The MAGI Check

Audit the reasoning chain — each step, not just the conclusion. Process verification outperforms outcome verification.

Single agent with three lenses has conformity bias — all lenses share the model's blind spots and cannot surface errors the model itself cannot recognize. The 2/3 vote is a structured reasoning discipline, not independent verification. In a single-agent setting, conformity bias can make self-debate worse than no debate: the check becomes rubber-stamping rather than verification. Compensate: actively seek reasons each step FAILS, and treat unanimous agreement with the same scrutiny as disagreement. The value of the check lies in evaluating each reasoning step independently — catching errors where they originate, not in the number of perspectives applied.

Chain to Audit

Step 1. Observation — "User said X" — accurately captured? (factual check)
Step 2. Generalization — "User prefers Y" — follows from observation?
Step 3. Scope — "Applies to Z" — justified, or situational?
Step 4. Rule — "Do Y in Z" — faithfully encodes the generalization?
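
An illustrative trace through the chain (invented, not from the skill's own logs): Step 1 observation "user replaced my 4-space indents with tabs twice this week"; Step 2 generalization "user prefers tabs"; Step 3 scope "this repository's source files"; Step 4 rule "use tabs for indentation in this repository". Steps 2–4 are what the lenses below audit.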

Three Lenses (Steps 2–4)

MELCHIOR (Scientist): Logically valid? Overfitting to one incident?
BALTHASAR (Mother): Serves the user? Lasting preference or one-time ask?
CASPAR (Woman): Worth the complexity? Simpler alternative exists?

Dissent: MELCHIOR → more evidence. BALTHASAR → clarify with user. CASPAR → simplify. 2/3 on all steps → commit. Override confirmed rule → 3/3. This tiered threshold mirrors the principle that verification stringency should scale with decision stakes — routine additions require less consensus than overturning established rules.
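
Mechanically, the vote is a per-step tally with a stake-dependent threshold. A minimal sketch (function names are illustrative):

def step_passes(votes: dict[str, bool], override: bool = False) -> bool:
    """One audited step; votes maps lens name to pass/fail, e.g.
    {"MELCHIOR": True, "BALTHASAR": True, "CASPAR": False}."""
    needed = 3 if override else 2     # overriding a confirmed rule needs 3/3
    return sum(votes.values()) >= needed

def magi_check(step_votes: list[dict[str, bool]], override: bool = False) -> bool:
    """Steps 2-4 must all clear the threshold to commit the rule."""
    return all(step_passes(v, override) for v in step_votes)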

Memory Format

memory.md: single file the agent edits. Cap: 50 lines.

## Rules (verified, kept)
- [rule]: [rationale] (kept: YYYY-MM-DD, used: Nx)

## Applied (awaiting measurement)
- [rule]: [rationale] (applied: YYYY-MM-DD)

Unused 30 days → remove. Conflicts: specific > general > most recent > ask user.
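
The pruning and conflict rules are simple enough to state as code. A sketch, assuming each rule is parsed into a dict carrying an added date, a last_used date, and a scope_depth integer as a stand-in for specificity; all three field names are invented here:

from datetime import date, timedelta

def prune(rules: list[dict], today: date) -> list[dict]:
    """Drop rules unused for 30 days."""
    cutoff = today - timedelta(days=30)
    return [r for r in rules if r["last_used"] >= cutoff]

def resolve_conflict(a: dict, b: dict) -> dict | None:
    """Precedence: specific > general > most recent > ask user (None)."""
    if a["scope_depth"] != b["scope_depth"]:
        return max(a, b, key=lambda r: r["scope_depth"])
    if a["added"] != b["added"]:
        return max(a, b, key=lambda r: r["added"])
    return None  # genuine tie: ask the user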

Corrections & Experiment Log

corrections.md: YYYY-MM-DD | wrong → wanted. Keep last 30.

experiments.md: date | hypothesis | magi | rules_count | outcome | status

Example:

2026-03-25 | — | — | 0 | baseline | keep
2026-03-25 | use tabs | 3/3 | 1 | no correction | keep
2026-03-26 | increase verbosity | 1/3 | 1 | MELCHIOR: overfitting | discard
2026-03-27 | formal tone | 2/3 | 2 | corrected again | revert

rules_count = complexity metric. status: keep, discard, revert, crash.
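
Because every row shares the same pipe-delimited shape, the log is trivially machine-readable, which is useful for the drift guard above. A parsing sketch (helper name is illustrative):

def parse_row(line: str) -> dict:
    """Split one experiments.md row:
    date | hypothesis | magi | rules_count | outcome | status"""
    date_, hypothesis, magi, rules_count, outcome, status = \
        (field.strip() for field in line.split("|"))
    return {"date": date_, "hypothesis": hypothesis, "magi": magi,
            "rules_count": int(rules_count), "outcome": outcome,
            "status": status}

row = parse_row("2026-03-27 | formal tone | 2/3 | 2 | corrected again | revert")
assert row["status"] == "revert" and row["rules_count"] == 2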

Triggers

Signal → Action

  • User corrects → Log + full cycle
  • Repeated correction → Flag failure, escalate
  • "Always / Never X" → Full cycle, high confidence
  • Task succeeds → Note signal only
  • After multi-step work → Self-reflect, cycle if concrete

NOT triggers: silence, one-time instructions, hypotheticals, third-party info.

Security & Simplicity

Never store: credentials, financial data, health info, third-party info. "What do you know?" → show memory.md. "Forget X" → remove, confirm. The best memory.md is the smallest one that minimizes correction rate. Fewer rules = always better.

Setup Note

After clawhub install magi, the skill lives at ./skills/magi/. The agent needs write access to this directory — it edits memory.md and appends to experiments.md and corrections.md during operation.

By default the agent pauses for user approval before applying or reverting rules. To allow autonomous operation, set autonomous: true in the frontmatter.
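
Assuming the usual YAML frontmatter at the top of SKILL.md (only the autonomous key is documented here; the name field shown is a placeholder), the toggle looks like:

---
name: magi
autonomous: true   # default is false: pause before APPLY and MEASURE
---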
