Install
```sh
openclaw skills install local-self-healing-machine-learning
```

> "Your agent learns from its own mistakes — without ever calling home, revealing your machine ID, or exposing any security holes."
A fully local machine learning engine that makes your OpenClaw agent smart over time. It watches your agent's runtime history, detects recurring failures, clusters similar errors using semantic embeddings, and autonomously evolves fix strategies — all running 100% on your machine with zero network calls.
The engine uses a feedback loop that tracks whether each fix actually works: after 3 clean cycles a fix is marked "proven", and if the error comes back within 5 cycles it's marked "failed". A k-NN predictor learns from these outcomes and gets better at picking the right fix over time. Lessons compound in a persistent knowledge base that never decays — the longer it runs, the smarter it gets.
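The proven/failed rule above can be sketched as a small state machine. This is a minimal illustration of the described thresholds (3 clean cycles, recurrence within 5 cycles), not the skill's actual API; the `FixTracker` class and method names are assumptions.

```javascript
// Illustrative sketch of the fix-outcome feedback loop.
// A fix starts 'pending'; 3 consecutive clean cycles promote it to
// 'proven', and a recurrence within 5 cycles demotes it to 'failed'.
class FixTracker {
  constructor() {
    this.fixes = new Map(); // fixId -> { cleanCycles, cyclesSinceApplied, status }
  }

  apply(fixId) {
    this.fixes.set(fixId, { cleanCycles: 0, cyclesSinceApplied: 0, status: 'pending' });
  }

  // Call once per cycle; errorRecurred says whether the original error reappeared.
  recordCycle(fixId, errorRecurred) {
    const fix = this.fixes.get(fixId);
    if (!fix || fix.status !== 'pending') return fix && fix.status;
    fix.cyclesSinceApplied += 1;
    if (errorRecurred && fix.cyclesSinceApplied <= 5) {
      fix.status = 'failed';            // error came back within 5 cycles
    } else if (!errorRecurred) {
      fix.cleanCycles += 1;
      if (fix.cleanCycles >= 3) fix.status = 'proven'; // 3 clean cycles
    }
    return fix.status;
  }
}
```

Outcomes recorded this way are what the k-NN predictor trains on, so every cycle adds a labeled example.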
Every evolution is auditable through the GEP (Genetic Evolution Protocol), which produces structured, content-hashed assets: genes (reusable fix strategies), capsules (successful evolution records), and an append-only event log. You can inspect exactly what changed, why it changed, and whether it worked.
No telemetry. No fingerprinting. No cloud dependencies. No data leaves your device.
View your ML engine's status, training progress, and knowledge base in a local web dashboard:
```sh
node index.js --dashboard
```
Opens at http://localhost:8420. Shows feedback loop stats, predictor training progress, error clusters, knowledge base health, and recent evolution events. No external dependencies — runs entirely in your browser.
For semantic error matching (recommended but not required):
```sh
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Pull the embedding model
ollama pull llama3.2:3b
```
Without Ollama, the engine falls back to regex-based heuristics. Everything still works; embeddings just make the error matching smarter.
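A regex-based fallback typically normalizes error messages into coarse fingerprints so that similar errors cluster together without embeddings. The skill's actual heuristics aren't published, so this is a hypothetical sketch of the idea:

```javascript
// Illustrative fallback: strip out the parts of an error message that
// vary between occurrences (paths, addresses, numbers) so repeated
// instances of the same underlying error share one fingerprint.
function errorFingerprint(message) {
  return message
    .replace(/\/[^\s]+/g, '<path>')       // file paths
    .replace(/0x[0-9a-fA-F]+/g, '<addr>') // hex addresses
    .replace(/\d+/g, '<n>')               // line numbers, pids, counts
    .toLowerCase()
    .trim();
}
```

Embedding-based matching goes further by also grouping errors whose wording differs, which a fingerprint like this cannot do.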
```sh
node index.js
node index.js --review
node index.js --loop
```
| Environment Variable | Default | Description |
|---|---|---|
| `EVOLVE_ALLOW_SELF_MODIFY` | `false` | Allow evolution to modify its own source code. Not recommended. |
| `EVOLVE_LOAD_MAX` | `2.0` | Maximum 1-minute load average before backing off. |
| `EVOLVE_STRATEGY` | `balanced` | Strategy: `balanced`, `innovate`, `harden`, `repair-only`, `early-stabilize`, `steady-state`, or `auto`. |
| `OLLAMA_URL` | `http://localhost:11434` | Ollama API endpoint for embeddings. |
| `OLLAMA_EMBED_MODEL` | `llama3.2:3b` | Model to use for embeddings. |
| `LSHML_DASHBOARD_PORT` | `8420` | Port for the standalone dashboard server. |
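Reading these variables with their documented defaults might look like the sketch below; `loadConfig` and the returned field names are illustrative, not the skill's internal API.

```javascript
// Illustrative config loader using the defaults from the table above.
function loadConfig(env = process.env) {
  return {
    allowSelfModify: env.EVOLVE_ALLOW_SELF_MODIFY === 'true',     // default: false
    loadMax: parseFloat(env.EVOLVE_LOAD_MAX || '2.0'),
    strategy: env.EVOLVE_STRATEGY || 'balanced',
    ollamaUrl: env.OLLAMA_URL || 'http://localhost:11434',
    embedModel: env.OLLAMA_EMBED_MODEL || 'llama3.2:3b',
    dashboardPort: parseInt(env.LSHML_DASHBOARD_PORT || '8420', 10),
  };
}
```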
All data stays local in memory/:
| File | Purpose |
|---|---|
| `feedback.jsonl` | Fix outcome tracking (append-only) |
| `embeddings-cache.json` | Cached embedding vectors |
| `knowledge.json` | Persistent lessons (no decay) |
| `predictor.json` | Trained model weights |
| `cluster-registry.json` | Semantic error cluster map |
Every evolution produces structured, auditable assets:
- `assets/gep/genes.json`: Reusable fix strategies
- `assets/gep/capsules.json`: Successful evolution records
- `assets/gep/events.jsonl`: Append-only audit trail

Built by Joe Che
MIT