Self Improving Agent

Pass. Audited by VirusTotal on May 11, 2026.

Overview

Type: OpenClaw Skill
Name: self-improver
Version: 3.2.1

The bundle implements a "self-improving" system that dynamically loads and executes arbitrary Python code from a hooks directory using importlib (`src/hooks.py`), which presents a significant risk of Remote Code Execution (RCE) if the environment is tampered with. Additionally, the agent automatically captures and stores sensitive execution data, including full stack tracebacks and session metadata, in local JSON files (`hooks/error_learning.py`, `hooks/session_learning.py`). While these capabilities are consistent with the stated purpose of a learning agent, and no active data exfiltration was found, the combination of arbitrary code execution and sensitive data collection is inherently high-risk.
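The importlib-based loading pattern described above typically looks something like the following sketch. This is a hypothetical reconstruction, not the bundle's actual code; the directory layout and function name are assumptions. It illustrates why the mechanism is flagged: any `.py` file dropped into the hooks directory is executed with the agent's privileges.

```python
import importlib.util
from pathlib import Path


def load_hooks(hooks_dir: str = "./hooks"):
    """Dynamically import every .py file in hooks_dir.

    Any code placed in the directory runs at import time with the
    agent's full privileges -- this is the RCE exposure the scan notes.
    """
    hooks = []
    for path in sorted(Path(hooks_dir).glob("*.py")):
        spec = importlib.util.spec_from_file_location(path.stem, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)  # executes arbitrary Python
        hooks.append(module)
    return hooks
```

Because `exec_module` runs module-level code immediately, tampering with the hooks directory is equivalent to tampering with the agent itself.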

Findings (4)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

What this means

Private or sensitive interaction details could be retained locally and influence future sessions, and a bad or manipulated learning could persist across tasks.

Why it was flagged

The skill explicitly stores derived conversation and user-preference data as persistent memory, which can later be reused by the agent.

Skill content
Session Learning ... Learns from: Conversation patterns, user preferences ... Stored in: `learnings/sessions.json`
Recommendation

Disable or gate auto-learning/auto-apply by default, review stored learnings before reuse, sanitize error/session data, and provide clear retention and deletion controls.
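The "review stored learnings before reuse" step can be gated in code. The sketch below assumes learnings are stored as a JSON list in `learnings/sessions.json`, as the skill content suggests; the `approved` field is a hypothetical addition, not part of the skill's actual schema.

```python
import json
from pathlib import Path


def load_approved_learnings(path: str = "learnings/sessions.json"):
    """Return only learnings a human has explicitly marked approved.

    Entries without an explicit `"approved": true` stay quarantined,
    so a bad or manipulated learning cannot silently influence
    future sessions.
    """
    file = Path(path)
    if not file.exists():
        return []
    entries = json.loads(file.read_text())
    return [e for e in entries if e.get("approved") is True]
```

Deleting an entry from the file (or flipping `approved` back to `false`) then serves as the retention/deletion control the recommendation asks for.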

What this means

A mistaken, low-quality, or poisoned learning from one interaction could affect later work until the user finds and removes it.

Why it was flagged

The recommended automated workflow can carry a learned behavior from one session into future startup and session flows without a clearly documented containment or review step.

Skill content
"auto_learn": true, "auto_apply": true, "learn_after_session": true, "apply_on_startup": true
Recommendation

Require explicit user approval before applying new learnings, add rollback/reset controls, and separate experimental learnings from trusted production behavior.
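One way to make approval explicit is to invert the defaults shown in the skill content: every learning flag starts off, and only keys the user deliberately sets are changed. A minimal sketch (the flag names come from the skill's config; the merge helper is hypothetical):

```python
# Safer defaults: learning is opt-in and nothing is auto-applied.
SAFE_DEFAULTS = {
    "auto_learn": False,
    "auto_apply": False,
    "learn_after_session": False,
    "apply_on_startup": False,
}


def merge_config(user_config: dict) -> dict:
    """Start from safe defaults; only explicitly set keys change."""
    cfg = dict(SAFE_DEFAULTS)
    cfg.update(user_config)
    return cfg
```

With this shape, `merge_config({"auto_learn": True})` enables learning capture but still leaves `auto_apply` and `apply_on_startup` off until the user opts in to each separately.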

What this means

Hook files can change agent behavior and, depending on implementation, may execute Python code during normal runs.

Why it was flagged

The hook system is purpose-aligned, but auto-loading and applying hook code from a workspace path is a powerful mechanism that users should inspect before enabling.

Skill content
hooks.apply_all()  # Apply all hooks ... "auto_load": true, "custom_hooks_path": "./hooks"
Recommendation

Inspect all hook files, avoid enabling untrusted custom hook paths, and prefer an allowlist or approval prompt before hooks run automatically.
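The allowlist approach can be implemented by recording a digest of each hook file at review time and refusing to run anything else. This is a sketch under the assumption that hooks are plain files on disk; the allowlist set and function name are hypothetical, not part of the skill.

```python
import hashlib
from pathlib import Path

# SHA-256 digests of hook files that have been manually reviewed.
ALLOWED_HOOK_HASHES: set = set()


def is_hook_allowed(path: str) -> bool:
    """Run a hook only if its exact contents were previously reviewed.

    Any edit to the file changes its digest, so a tampered hook is
    rejected even if the filename is unchanged.
    """
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return digest in ALLOWED_HOOK_HASHES
```

A loader would call `is_hook_allowed(path)` before importing each file, and either skip or prompt for approval on a miss.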

What this means

Users may have to rely on an external repository or local code review to know exactly what is being installed and run.

Why it was flagged

The registry metadata does not provide a clear trusted source or install contract, while the artifacts include code and documentation for cloning/installing a Python package.

Skill content
Source: unknown; Homepage: none; Install specifications: No install spec
Recommendation

Install only from a trusted, reviewed source; verify the repository and dependencies; and prefer a declared, reproducible install specification.
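Absent a declared install spec, a reviewer can at least pin the exact artifact they audited by recording its digest and checking it before every install. A minimal sketch; the expected digest would come from the reviewer's own records, and the function name is an assumption:

```python
import hashlib
from pathlib import Path


def verify_artifact(path: str, expected_sha256: str) -> bool:
    """Compare a downloaded skill archive against the digest recorded
    at review time, so later installs get byte-identical code."""
    actual = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return actual == expected_sha256
```

This does not substitute for a trusted source or a reproducible install specification, but it ensures that what runs is what was reviewed.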