Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Error-Driven Evolution

v1.0.0

Structured error-to-rule learning system for AI agents. Activate when an agent makes a mistake, receives a correction from the user, or needs to check past l...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for marsnavi/error-driven-evolution.

Prompt Preview: Install & Setup
Install the skill "Error-Driven Evolution" (marsnavi/error-driven-evolution) from ClawHub.
Skill page: https://clawhub.ai/marsnavi/error-driven-evolution
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Canonical install target

openclaw skills install marsnavi/error-driven-evolution

ClawHub CLI


npx clawhub@latest install error-driven-evolution
Security Scan
VirusTotal
Suspicious
View report →
OpenClaw
Benign
high confidence
Purpose & Capability
Name/description (turning errors into executable rules and scanning them before decisions) matches the skill's instructions: create/append lessons.md, scan it pre-decision, and optionally share anonymized lessons. No unrelated binaries, env vars, or config paths are requested.
Instruction Scope
Instructions are focused on writing/reading lessons.md and skimming a community top-100 file. They also recommend sharing anonymized lessons to a GitHub repo and mention running a submission script (python3 scripts/submit_lesson.py) — the skill does not include those scripts or community files, and the sharing step introduces risk of accidental secret/PII leakage if anonymization fails. There is some openness in 'scan relevant rules' and 'query community/{category}.md on-demand' which could lead to network access or broader file reads depending on implementation.
Install Mechanism
Instruction-only skill with no install steps or downloads; nothing is written to disk by the skill itself beyond instructing the agent to create lessons.md in its workspace (which is consistent with the purpose).
Credentials
The skill requests no credentials and no special environment access; it does rely on reading/writing the agent's workspace files. Sharing to GitHub (PRs/auto-create PR flag) may require tokens the agent already has — the skill does not request or justify any extra secrets. This is proportionate to the feature set but worth noting because sharing can expose sensitive content if anonymization fails.
Persistence & Privilege
always:false and no instructions to modify other skills or global agent configs. The skill expects to persist a lessons.md file in the workspace (normal for a learning/rule system) and to skim top-100.md at startup; this is within expected privilege for its purpose.
Assessment
This skill is coherent with its stated goal, but take these precautions before enabling it:

  1. Treat lessons.md as potentially sensitive — restrict who/what can read or write it.
  2. Do not enable automatic community submissions (auto-PR) without a human review step; agents can accidentally include URLs, file paths, API keys, or other secrets even if an anonymization checklist exists.
  3. If you plan to use the submission script, ensure the script is vetted and stored in a trusted location; the skill does not include it.
  4. Provide a curated top-100.md from a trusted source, or disable community lookups if network access is a concern.
  5. Add automated checks (regexes, allowlists) to the anonymization step and require explicit human confirmation before any external push.
  6. If you have strict data-handling policies, restrict or audit the agent's ability to perform external network calls and to access workspace files.

These steps reduce the primary risk: accidental leakage of secrets/PII during sharing.

Like a lobster shell, security has layers — review code before you run it.

latest vk970zzr9a7aesy3trpwv4myedn81vbea
782 downloads
0 stars
1 version
Updated 23h ago
v1.0.0
MIT-0

Error-Driven Evolution

Turn mistakes into rules. Not reflections, not apologies — rules.

Core Concept

When an agent makes an error or gets corrected, it must:

  1. Extract a rule (not a story)
  2. Write it to lessons.md in its workspace
  3. Scan relevant rules before future decisions in that domain
  4. Optionally share anonymized rules to the community repo

lessons.md Format

File location: {workspace}/lessons.md

Each rule follows this structure:

### [CATEGORY] Short imperative title

- **When**: The specific situation/trigger
- **Do**: The correct action (imperative, specific)
- **Don't**: The wrong action that was taken
- **Why**: One sentence — what went wrong
- **Added**: YYYY-MM-DD
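
A filled-in entry might look like this (the rule content below is invented purely for illustration):

```markdown
### [DATA] Verify units before presenting metrics

- **When**: Reporting latency or throughput numbers from a dashboard
- **Do**: Confirm the unit (ms vs s) in the source before quoting figures
- **Don't**: Assume milliseconds and report an inflated latency
- **Why**: A figure was presented in the wrong unit and had to be retracted
- **Added**: 2025-01-15
```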

Categories

| Tag | Scope |
| --- | --- |
| DATA | Querying, interpreting, presenting data |
| COMMS | Messaging, tone, audience, channels |
| SCOPE | Role boundaries, doing others' work |
| EXEC | Task execution, tools, file ops |
| JUDGMENT | Decisions, priorities, assumptions |
| CONTEXT | Memory, context window, info management |
| SAFETY | Security, privacy, destructive ops |
| COLLAB | Multi-agent coordination, handoffs |

When to Record

Record a rule when:

  1. User corrects you — explicit feedback
  2. User overrides your output — they redo your work
  3. Same error twice — second occurrence MUST become a rule
  4. Near miss — you catch yourself about to repeat a mistake

Do NOT record: one-off technical glitches, user preference changes (those go in MEMORY.md).

How to Record

  1. Stop. Don't apologize at length.
  2. Identify the category.
  3. Write the rule in imperative form.
  4. Append to lessons.md (never overwrite).
  5. Confirm briefly: "Added to lessons: [title]"
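
The recording steps above can be sketched as a small append-only helper. This is a minimal sketch, assuming lessons.md lives in the workspace root; the function name and parameters are illustrative, not part of the skill:

```python
from datetime import date
from pathlib import Path

def append_rule(workspace: str, category: str, title: str,
                when: str, do: str, dont: str, why: str) -> None:
    """Append one rule to lessons.md in the skill's documented format.

    Opens the file in append mode so existing rules are never overwritten,
    matching the skill's "append, never overwrite" instruction.
    """
    entry = (
        f"\n### [{category}] {title}\n\n"
        f"- **When**: {when}\n"
        f"- **Do**: {do}\n"
        f"- **Don't**: {dont}\n"
        f"- **Why**: {why}\n"
        f"- **Added**: {date.today().isoformat()}\n"
    )
    path = Path(workspace) / "lessons.md"
    with path.open("a", encoding="utf-8") as f:  # "a" creates the file if missing
        f.write(entry)
```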

Pre-Decision Scan

Before acting, scan lessons.md for applicable rules:

| About to... | Check |
| --- | --- |
| Present data | [DATA] |
| Send message / write report | [COMMS] + [SCOPE] |
| Make suggestion | [JUDGMENT] + [SCOPE] |
| Execute multi-step task | [EXEC] + [CONTEXT] |
| Start new session | All (skim titles) |

Scan = read ### [TAG] headers, check if any When matches your situation.
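
The header-matching scan described above can be sketched in Python. The regex and function name are illustrative, assuming rules follow the `### [TAG] Title` format defined earlier:

```python
import re
from pathlib import Path

# Matches rule headers like "### [DATA] Check units first"
RULE_HEADER = re.compile(r"^### \[([A-Z]+)\] (.+)$", re.MULTILINE)

def scan_rules(lessons_path: str, categories: set[str]) -> list[str]:
    """Return titles of rules whose category tag matches the upcoming action."""
    text = Path(lessons_path).read_text(encoding="utf-8")
    return [title for tag, title in RULE_HEADER.findall(text)
            if tag in categories]
```

Before sending a report, for example, an agent would call `scan_rules("lessons.md", {"COMMS", "SCOPE"})` and read any matching rules in full.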

Community Sharing

Share anonymized lessons to help other agents: https://github.com/anthropic-ai/agent-lessons

See references/community-sharing.md for the anonymization and submission process.

Setup

  1. Create lessons.md in your workspace:

# Lessons
Rules extracted from mistakes. Append after failing, scan before deciding.

  2. Copy community/top-100.md to your workspace as top-100.md — this is your pre-installed immune system. Small enough to skim on startup, covers the most common and costly mistakes across all agent deployments.

  3. Add to your startup instructions:

- On startup: skim top-100.md titles (pre-installed community lessons)
- On correction/failure: append rule to lessons.md
- Before decisions: scan lessons.md + top-100.md for [CATEGORY] rules

Loading Strategy

Your agent has two rule files:

| File | Source | Load on startup | Size target |
| --- | --- | --- | --- |
| lessons.md | Your own mistakes | Yes, fully | Grows organically |
| top-100.md | Community top picks | Yes, skim titles | ~8KB, curated |

For deeper community search (beyond top-100), query community/{category}.md files on-demand when facing an unfamiliar situation.

Maintenance

When lessons.md exceeds 50 rules: review for duplicates, retire obsolete rules (mark don't delete), consider splitting by category.
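
The 50-rule threshold can be checked mechanically. A minimal sketch; the threshold and the `### [TAG]` header convention come from this document, while the function name is illustrative:

```python
from pathlib import Path

def needs_review(lessons_path: str, limit: int = 50) -> bool:
    """True once lessons.md holds more rules than the maintenance limit."""
    text = Path(lessons_path).read_text(encoding="utf-8")
    # Each rule begins with a "### [" header, so counting those counts rules.
    count = sum(1 for line in text.splitlines() if line.startswith("### ["))
    return count > limit
```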
