## Install

```bash
openclaw skills install reflect-learn
```

Self-improvement through conversation analysis. Extracts learnings from corrections and success patterns, then proposes updates to agent files or creates new skills. Philosophy: "Correct once, never again."

Use when: (1) the user explicitly corrects behavior ("never do X", "always Y"), (2) a session is ending or context is being compacted, (3) the user requests /reflect, (4) a successful pattern is worth preserving.

| Command | Action |
|---|---|
| `/reflect` | Analyze conversation for learnings |
| `/reflect on` | Enable auto-reflection |
| `/reflect off` | Disable auto-reflection |
| `/reflect status` | Show state and metrics |
| `/reflect review` | Review low-confidence learnings |
| `/reflect [agent]` | Focus on specific agent |
"Correct once, never again."
When users correct behavior, those corrections become permanent improvements encoded into the agent system, carried across all future sessions.
Check and initialize state files using the state manager:

```bash
# Check for existing state
python scripts/state_manager.py init

# State directory is configurable via the REFLECT_STATE_DIR env var
# Default: ~/.reflect/ (portable) or ~/.claude/session/ (Claude Code)
```
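The directory resolution described above could be sketched like this (an illustrative helper, not the actual `state_manager.py` implementation; only the env var name and the two default paths come from this document):

```python
import os
from pathlib import Path

def resolve_state_dir() -> Path:
    """Return the state directory, honoring REFLECT_STATE_DIR."""
    override = os.environ.get("REFLECT_STATE_DIR")
    if override:
        return Path(override)
    claude_session = Path.home() / ".claude" / "session"
    if claude_session.exists():
        return claude_session  # Claude Code layout
    return Path(Path.home() / ".reflect")  # portable default
```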
State includes:

- `reflect-state.yaml` - Toggle state, pending reviews
- `reflect-metrics.yaml` - Aggregate metrics
- `learnings.yaml` - Log of all applied learnings

Use the signal detector to identify learnings:
```bash
python scripts/signal_detector.py --input conversation.txt
```
| Confidence | Triggers | Examples |
|---|---|---|
| HIGH | Explicit corrections | "never", "always", "wrong", "stop", "the rule is" |
| MEDIUM | Approved approaches | "perfect", "exactly", accepted output |
| LOW | Observations | Patterns that worked, not validated |
See signal_patterns.md for full detection rules.
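A minimal sketch of confidence-tiered detection, using only the trigger words from the table above (the full rules live in signal_patterns.md; this helper is illustrative, not the shipped `signal_detector.py`):

```python
import re

# Trigger phrases taken from the confidence table above
HIGH_TRIGGERS = re.compile(r"\b(never|always|wrong|stop|the rule is)\b", re.I)
MEDIUM_TRIGGERS = re.compile(r"\b(perfect|exactly)\b", re.I)

def classify(message: str) -> str:
    """Return HIGH/MEDIUM/LOW confidence for a single user message."""
    if HIGH_TRIGGERS.search(message):
        return "HIGH"
    if MEDIUM_TRIGGERS.search(message):
        return "MEDIUM"
    return "LOW"  # observation only, not validated
```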
Map each signal to the appropriate target:
Learning Categories:
| Category | Target Files |
|---|---|
| Code Style | code-reviewer, backend-developer, frontend-developer |
| Architecture | solution-architect, api-architect, architecture-reviewer |
| Process | CLAUDE.md, orchestrator agents |
| Domain | Domain-specific agents, CLAUDE.md |
| Tools | CLAUDE.md, relevant specialists |
| New Skill | .claude/skills/{name}/SKILL.md |
See agent_mappings.md for mapping rules.
Some learnings should become new skills rather than agent updates:
Skill-Worthy Criteria:
Quality Gates (must pass all):
See skill_template.md for skill creation guidelines.
Produce output in this format:
# Reflection Analysis
## Session Context
- **Date**: [timestamp]
- **Messages Analyzed**: [count]
- **Focus**: [all agents OR specific agent name]
## Signals Detected
| # | Signal | Confidence | Source Quote | Category |
|---|--------|------------|--------------|----------|
| 1 | [learning] | HIGH | "[exact words]" | Code Style |
| 2 | [learning] | MEDIUM | "[context]" | Architecture |
## Proposed Agent Updates
### Change 1: Update [agent-name]
**Target**: `[file path]`
**Section**: [section name]
**Confidence**: [HIGH/MEDIUM/LOW]
**Rationale**: [why this change]
```diff
--- a/path/to/agent.md
+++ b/path/to/agent.md
@@ -82,2 +82,3 @@
 ## Section
 * Existing rule
+* New rule from learning
```

Quality Gate Check:
Will create: .claude/skills/[skill-name]/SKILL.md
reflect: add learnings from session [date]
Agent updates:
- [learning 1 summary]
New skills:
- [skill-name]: [brief description]
Extracted: [N] signals ([H] high, [M] medium, [L] low confidence)
Apply these changes?
- `Y` - Apply all changes and commit
- `N` - Discard all changes
- `modify` - Adjust specific changes
- `1,3` - Apply only changes 1 and 3
- `s1` - Apply only skill 1
- `all-skills` - Apply all skills, skip agent updates
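Parsing the reply could be sketched as follows (the option strings come from the list above; the function and its return shape are illustrative assumptions):

```python
def parse_response(reply: str):
    """Map a user reply to an (action, indices) pair."""
    reply = reply.strip().lower()
    if reply == "y":
        return ("apply_all", None)
    if reply == "n":
        return ("discard", None)
    if reply == "modify":
        return ("modify", None)
    if reply == "all-skills":
        return ("skills_only", None)
    if reply.startswith("s") and reply[1:].isdigit():
        return ("skill", [int(reply[1:])])  # e.g. "s1"
    # "1,3" -> apply only changes 1 and 3
    return ("selective", [int(n) for n in reply.split(",")])
```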
### Step 6: Handle User Response
**On `Y` (approve):**
1. Apply each change using Edit tool
2. Run `git add` on modified files
3. Commit with generated message
4. Update learnings log
5. Update metrics
**On `N` (reject):**
1. Discard proposed changes
2. Log rejection for analysis
3. Ask if user wants to modify any signals
**On `modify`:**
1. Present each change individually
2. Allow editing the proposed addition
3. Reconfirm before applying
**On selective (e.g., `1,3`):**
1. Apply only specified changes
2. Log partial acceptance
3. Commit only applied changes
### Step 7: Update Metrics
```bash
python scripts/metrics_updater.py --accepted 3 --rejected 1 --confidence high:2,medium:1
```
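The `--confidence high:2,medium:1` argument shown above could be parsed like this (the flag format is taken from the command; the helper itself is an illustrative assumption, not the shipped `metrics_updater.py`):

```python
def parse_confidence(arg: str) -> dict[str, int]:
    """Turn "high:2,medium:1" into {"high": 2, "medium": 1}."""
    pairs = (item.split(":") for item in arg.split(","))
    return {level: int(count) for level, count in pairs}
```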
```bash
/reflect on
# Sets auto_reflect: true in the state file
# Will trigger on the PreCompact hook

/reflect off
# Sets auto_reflect: false in the state file

/reflect status
# Shows current state and metrics

/reflect review
# Shows low-confidence learnings awaiting validation
```
Project-level (versioned with repo):
- `.claude/reflections/YYYY-MM-DD_HH-MM-SS.md` - Full reflection
- `.claude/reflections/index.md` - Project summary
- `.claude/skills/{name}/SKILL.md` - New skills

Global (user-level):

- `~/.claude/reflections/by-project/{project}/` - Cross-project
- `~/.claude/reflections/by-agent/{agent}/learnings.md` - Per-agent
- `~/.claude/reflections/index.md` - Global summary

Some learnings belong in auto-memory (`~/.claude/projects/*/memory/MEMORY.md`) rather than agent files:
| Learning Type | Best Target |
|---|---|
| Behavioral correction ("always do X") | Agent file |
| Project-specific pattern | MEMORY.md |
| Recurring bug/workaround | New skill OR MEMORY.md |
| Tool preference | CLAUDE.md |
| Domain knowledge | MEMORY.md or compound-docs |
When a signal is LOW confidence and project-specific, prefer writing to MEMORY.md over modifying agents.
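The routing rule above could be captured in a small helper (a sketch of the stated preference only; the function name and the fallback targets are illustrative assumptions):

```python
def route_learning(confidence: str, project_specific: bool) -> str:
    """Choose where a learning should be written, per the routing table."""
    if confidence == "LOW" and project_specific:
        return "MEMORY.md"  # don't modify agents for unvalidated local patterns
    if confidence == "HIGH":
        return "agent file"  # behavioral corrections go to agents
    return "CLAUDE.md"
```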
Applied changes can be rolled back with `git revert`. If auto-reflection is enabled, the PreCompact hook triggers reflection before handover.
At 70%+ context usage (Yellow status), reminders to run `/reflect` are injected.
The skill includes hook scripts for automatic integration:
```bash
# Install the hook into your Claude hooks directory
cp hooks/precompact_reflect.py ~/.claude/hooks/
```
Configure in ~/.claude/settings.json:
```json
{
  "hooks": {
    "PreCompact": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "uv run ~/.claude/hooks/precompact_reflect.py --auto"
          }
        ]
      }
    ]
  }
}
```
See hooks/README.md for full configuration options.
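The hook's core logic might look roughly like this (a hedged sketch, not the shipped `precompact_reflect.py`; it assumes the Claude Code convention of passing hook event data as JSON on stdin and hard-codes the state for brevity):

```python
import json
import sys

def should_reflect(state: dict) -> bool:
    """Fire only when the user has enabled auto-reflection."""
    return bool(state.get("auto_reflect"))

def main() -> int:
    # Claude Code passes hook event data as JSON on stdin.
    event = json.load(sys.stdin)
    # Real code would read auto_reflect from reflect-state.yaml;
    # hard-coded here to keep the sketch self-contained.
    state = {"auto_reflect": True}
    if should_reflect(state):
        print("Context compaction imminent: run /reflect first", file=sys.stderr)
    return 0
```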
This skill works with any LLM tool that supports:
```bash
# Set custom state directory
export REFLECT_STATE_DIR=/path/to/state

# Or use the defaults:
# ~/.reflect/ (portable default)
# ~/.claude/session/ (Claude Code default)
```
Unlike the previous agent-based approach, this skill executes directly without spawning subagents. The LLM reads SKILL.md and follows the workflow.
Commits are wrapped with availability checks: if not in a git repo, changes are still saved but not committed.
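An availability check like the one described could be implemented as follows (an illustrative sketch, not the skill's actual code; `commit_if_possible` is a hypothetical name):

```python
import shutil
import subprocess

def in_git_repo() -> bool:
    """True only if git exists and the cwd is inside a work tree."""
    if shutil.which("git") is None:
        return False
    result = subprocess.run(
        ["git", "rev-parse", "--is-inside-work-tree"],
        capture_output=True, text=True,
    )
    return result.returncode == 0 and result.stdout.strip() == "true"

def commit_if_possible(message: str) -> None:
    """Commit staged learnings, or silently skip outside a repo."""
    if not in_git_repo():
        return  # changes stay on disk, just not committed
    subprocess.run(["git", "add", "-A"], check=True)
    subprocess.run(["git", "commit", "-m", message], check=True)
```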
No signals detected:
- `/reflect review` to check pending items

Conflict warning:

Agent file not found:
- `/reflect status` to see available targets

```
reflect/
├── SKILL.md                    # This file
├── scripts/
│   ├── state_manager.py        # State file CRUD
│   ├── signal_detector.py      # Pattern matching
│   ├── metrics_updater.py      # Metrics aggregation
│   └── output_generator.py     # Reflection file & index generation
├── hooks/
│   ├── precompact_reflect.py   # PreCompact hook integration
│   ├── settings-snippet.json   # settings.json examples
│   └── README.md               # Hook configuration guide
├── references/
│   ├── signal_patterns.md      # Detection rules
│   ├── agent_mappings.md       # Target mappings
│   └── skill_template.md       # Skill generation
└── assets/
    ├── reflection_template.md  # Output template
    └── learnings_schema.yaml   # Schema definition
```