learning-engine

PassAudited by VirusTotal on May 12, 2026.

Overview

Type: OpenClaw Skill Name: learning-engine Version: 1.0.1 The `SKILL.md` file describes a 'learning-engine' skill designed to auto-update other skill definitions by injecting 'learned rules' into `skills/{skill-name}/SKILL.md` files. These rules are derived from various sources, including error logs. This capability creates a significant prompt injection vulnerability and potential Remote Code Execution (RCE) risk, as attacker-controlled input (e.g., crafted error messages) could be processed, converted into a 'rule', and then injected into other skill definitions, leading to unauthorized execution by the agent. While the skill's stated purpose is self-improvement, the mechanism for modifying other skills without explicit sanitization of learned content is a critical design flaw that allows for malicious exploitation.

Findings (0)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

What this means

A generated or mistaken rule could silently change how other skills behave in future tasks.

Why it was flagged

This instructs the agent to modify installed skill instruction files rather than only report suggestions, with no described approval, diff review, backup, skill allowlist, or rollback.

Skill content
Auto-add learned rules to relevant skill SKILL.md ... Location: `skills/{skill-name}/SKILL.md`
Recommendation

Require explicit user approval and a visible diff before editing any skill file; limit edits to user-selected skills and keep backups for rollback.

What this means

Private operational history or bad lessons could be stored and reused as future instructions.

Why it was flagged

The skill turns persistent logs, evaluations, and performance data into reusable rules. Those sources may contain sensitive details or untrusted/incorrect content, and no validation or retention controls are described.

Skill content
Learning Sources ... `memory/errors/` ... self-eval Results ... performance Data ... Convert learned patterns to rules ... `memory/learned-rules/`
Recommendation

Keep learned rules separate from executable skill instructions until reviewed; add source filtering, retention limits, confidence scoring, conflict checks, and user approval.

What this means

One incorrect lesson could affect several workflows and keep influencing the agent after the original task is over.

Why it was flagged

The pipeline propagates one learned pattern into memory, skill files, events, and reports, so a bad inference can spread across future sessions and multiple skills.

Skill content
Extract patterns + Create rules → Save to memory/learned-rules/ → Auto-update relevant skill SKILL.md → Publish event
Recommendation

Add containment: stage changes as proposals, apply them one skill at a time, validate rules before propagation, and provide easy rollback.

What this means

The agent may continue generating reports or updating learning state outside a direct user request if connected to a hook engine.

Why it was flagged

The skill describes autonomous hook-triggered and scheduled activity, but the artifacts do not define explicit opt-in, disable, scoping, or review controls for those recurring actions.

Skill content
hook-engine Integration ... on-error hook ... post-hook ... scheduled hook: Every Monday → Generate weekly learning report
Recommendation

Only enable hooks after explicit user consent, document how to disable them, and require review before any hook-triggered skill edits.