Feedback Learning V2
Analysis
The skill appears locally focused and purpose-aligned, but it persists learned rules and logs command errors in ways that can influence future agents, so it should be reviewed before installation.
Findings (3)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Checks for instructions or behavior that redirect the agent, misuse tools, execute unexpected code, cascade across systems, exploit user trust, or continue outside the intended task.
Set up crons ... payload: python3 ~/.openclaw/shared/learning/analyze-patterns.py ... payload: python3 ~/.openclaw/shared/learning/weekly-report.py
The skill documents scheduled background processing and an optional PostToolUse hook; this is disclosed and purpose-aligned, but it means the system keeps operating after initial setup.
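To make the finding concrete, here is a hedged sketch of what a cron-driven pattern-analysis job of this shape could do. The paths, file format, and field names below are assumptions for illustration, not the skill's actual code; the point is that a scheduled job keeps reading and writing learning state after initial setup.

```python
# Hedged sketch of a scheduled analyze-patterns-style job (names assumed).
import json
from collections import Counter
from pathlib import Path

# Assumed layout matching the documented payload paths.
LEARNING_DIR = Path.home() / ".openclaw" / "shared" / "learning"

def analyze_patterns(log_path: Path) -> dict:
    """Count recurring error contexts in a JSON-lines event log."""
    counts = Counter()
    if log_path.exists():
        for line in log_path.read_text().splitlines():
            try:
                event = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip malformed log lines
            counts[event.get("context", "unknown")] += 1
    # Recurring failures become candidate rules for future runs.
    return {ctx: n for ctx, n in counts.items() if n >= 2}
```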
Checks for exposed credentials, poisoned memory or context, unclear communication boundaries, or sensitive data that could leave the user's control.
auto-promotes structured rules ... Before tasks: check `$FEEDBACK_LEARNING_DIR/genes.json` for applicable rules.
The skill persists generated rules and tells future agents to consult them before tasks, so feedback-derived content can become reusable agent context.
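As a hedged illustration of the "check genes.json for applicable rules" step, a before-task lookup could look like the sketch below. The file schema is an assumption; the skill's real rule format is not shown in the scan artifacts.

```python
# Hedged sketch of a pre-task rule lookup against a persisted genes.json
# (schema assumed for illustration).
import json
from pathlib import Path

def applicable_rules(genes_path: Path, task: str) -> list[str]:
    """Return persisted rule texts whose trigger substring matches the task."""
    if not genes_path.exists():
        return []
    genes = json.loads(genes_path.read_text())
    # Any matching rule is injected into the agent's working context, which
    # is why feedback-derived content can steer future runs.
    return [g["rule"] for g in genes.get("rules", [])
            if g.get("trigger", "") in task]
```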
STDERR="${TOOL_STDERR:-unknown error}" ... STDERR="${STDERR:0:200}" ... CONTEXT="${TOOL_COMMAND:-exec command}" ... bash "$DIR/log-event.sh" ... "$CONTEXT" "$STDERR"
When the optional hook is enabled, failed command text and stderr are persisted into the learning log; the code truncates stderr but does not redact secrets.
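To illustrate the gap, truncating stderr to 200 characters does not remove tokens that appear early in the message. The sketch below shows a scrubbing pass of the kind the hook lacks; it is not part of the skill, and the token patterns are illustrative assumptions, not an exhaustive set.

```python
# Hedged sketch: redact likely secrets before truncating, mirroring the
# hook's 200-character cap. Patterns are illustrative, not exhaustive.
import re

SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|token|password)\s*[=:]\s*\S+"),
    re.compile(r"ghp_[A-Za-z0-9]{36}"),   # GitHub personal access token shape
    re.compile(r"AKIA[0-9A-Z]{16}"),      # AWS access key ID shape
]

def scrub(text: str, limit: int = 200) -> str:
    """Redact likely secrets, then apply the 200-char truncation."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text[:limit]
```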
