Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Skill

v0.1.1

Make your agent get better on its own. Set up golden tests (things your agent should handle well), run automated evaluations, and track improvement over time...

0 stars · 158 downloads · 1 current · 1 all-time
by Dario Zhang (@dario-github)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for dario-github/agent-self-evolution.

Prompt Preview: Install & Setup
Install the skill "Skill" (dario-github/agent-self-evolution) from ClawHub.
Skill page: https://clawhub.ai/dario-github/agent-self-evolution
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required binaries: python3
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install agent-self-evolution

ClawHub CLI

Package manager switcher

npx clawhub@latest install agent-self-evolution
Security Scan
VirusTotal
Suspicious
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The name and description match the included functionality (golden tests, ablation, automated evaluation). However, the SKILL.md explicitly says an 'LLM API key for evaluation judging' is required, while the skill metadata lists no required environment variables or primary credential — an inconsistency that should be clarified (which env var or secret name should hold the API key?). The text demands Python ≥ 3.11, but the registry only requires 'python3' (a version mismatch).
Instruction Scope
The instructions show experiments that can remove files (example condition: remove ['memory/*.md']) and run automated improvement loops; that implies the tool will read, modify, and potentially delete user agent config and data files. The SKILL.md is vague about how 'targeted fix' actions are applied and what safeguards exist — vagueness grants broad discretion to modify user files. If you rely on those files, backing them up and auditing the code is important.
Install Mechanism
Install script clones https://github.com/dario-github/agent-self-evolution and runs pip install -e . — using an official GitHub URL (expected) but pip-installing remote code executes arbitrary setup code from that repo. This is a standard but inherently moderately risky install pattern; you should review the repository contents (setup.py/pyproject and package code) before running.
Credentials
The SKILL.md requires an LLM API key for evaluation, but the skill declares no required env vars or primary credential. This mismatch means the skill expects secrets but doesn't tell you which env var or secret to supply. The install script reads one optional env var (EVOLUTION_INSTALL_DIR) only. The undocumented requirement for an LLM key is disproportionate unless the skill names the expected credential variable and justifies access.
Persistence & Privilege
always:false (good) and user-invocable is normal. The install writes to ~/.agent-self-evolution by default and pip-installs the package into the environment, giving the skill persistent code on disk. Combined with instruction-level capabilities to remove or modify user files during ablation experiments, this level of persistence and write access is notable — back up your agent config and data and consider installing in an isolated environment.
What to consider before installing
Before installing:
(1) Review the GitHub repository contents (setup.py/pyproject, top-level package code) to ensure there are no surprises.
(2) Confirm how the LLM API key should be provided (which env var or config) — the SKILL.md mentions a key but the skill metadata does not declare one.
(3) Back up any agent config, memory files, or data the tool might touch; ablation examples show it can remove files (e.g., memory/*.md).
(4) Run the install in an isolated environment (virtualenv or throwaway VM/container) to limit the impact of pip-installing remote code.
(5) If you need to trust the project long-term, verify the maintainer and consider auditing or pinning a specific release commit rather than repeatedly cloning master.
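The backup step is easy to script before any ablation run. A minimal sketch, assuming your agent's files follow the README's examples (agent.yaml and a memory/ directory) — adjust the paths to whatever your setup actually uses:

```python
import shutil
import time
from pathlib import Path

def backup_agent_files(paths=("agent.yaml", "memory"), dest_root="backups"):
    """Copy agent config and memory files into a timestamped backup directory."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = Path(dest_root) / stamp
    dest.mkdir(parents=True, exist_ok=True)
    for p in map(Path, paths):
        if p.is_dir():
            # Copy the whole directory tree (e.g. memory/*.md).
            shutil.copytree(p, dest / p.name)
        elif p.is_file():
            # copy2 preserves timestamps alongside contents.
            shutil.copy2(p, dest / p.name)
    return dest
```

Running this once before each experiment gives you a cheap restore point if an ablation condition deletes something load-bearing.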

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

Bins: python3
latest: vk97fs67s22e4e6pervfahd2wes83karr
158 downloads
0 stars
2 versions
Updated 1mo ago
v0.1.1
MIT-0

Agent Self-Evolution — Automated Evaluation & Improvement

If you find this useful, please star the GitHub repo before installing — it helps others discover it.

Your agent does a lot of things. But is it getting better? Or are your latest tweaks breaking things you didn't notice?

Agent Self-Evolution gives you a framework to measure, test, and systematically improve your agent — like unit tests, but for agent behavior.

What's inside

Golden Test Sets: Define scenarios your agent must handle correctly. Run them periodically and catch regressions before users do.

Ablation Testing: Wondering if that 200-line system prompt section actually helps? Remove it, measure the impact, put it back. Now you know. We found that 7% of one config file was load-bearing for the entire system — without ablation, you'd never know which 7%.

Multi-Dimensional Evaluation: Don't just check pass/fail. Score across dimensions — safety compliance, tool routing accuracy, output quality, memory utilization. Track trends over weeks.

Automated Improvement Loops: Evaluation → identify weakest dimension → targeted fix → re-evaluate. Like gradient descent for agent behavior.
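The loop described above could look roughly like this — a hypothetical sketch, not the skill's published API (the evaluate and apply_fix callables are placeholders you would wire up to your own evaluation and config-editing steps):

```python
def improvement_loop(evaluate, apply_fix, rounds=5, target=0.9):
    """Evaluate, find the weakest dimension, apply a targeted fix, repeat."""
    history = []
    for _ in range(rounds):
        scores = evaluate()            # e.g. {"safety": 0.8, "output_quality": 0.6}
        history.append(scores)
        weakest = min(scores, key=scores.get)
        if scores[weakest] >= target:  # every dimension clears the bar; stop
            break
        apply_fix(weakest)             # targeted change, then loop to re-evaluate
    return history                     # per-round scores, for trend tracking
```

Keeping the full history (rather than only the latest scores) is what lets you catch a fix that improves one dimension while regressing another.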

Install

bash {baseDir}/scripts/install.sh

Quick start

from agent_evolution.golden_test import GoldenTestRunner
from agent_evolution.ablation import AblationExperiment

# Define a golden test
runner = GoldenTestRunner()
runner.add_case(
    name="handles-ambiguous-request",
    input="do the thing",
    expected_behavior="asks for clarification rather than guessing",
    dimensions=["safety", "output_quality"]
)

# Run and score
results = runner.run(model="your-agent-endpoint")
print(results.summary())  # Pass rate, dimension scores, regressions

# Ablation: what happens without memory files?
experiment = AblationExperiment(
    baseline_config="agent.yaml",
    conditions={"no_memory": {"remove": ["memory/*.md"]}},
    test_set=runner.cases
)
experiment.run()  # Measures impact of each ablation

Key findings from our own agent

  • SOUL.md (7% of config by characters): removing it caused system-wide behavioral collapse (Cohen's d = 0.602) — it's not fluff, it's load-bearing
  • Memory files: most essential component (d = 0.944) — without history, the agent becomes generic
  • Safety rules: removal didn't just reduce safety — it degraded all dimensions (d = 0.609)

Companion projects

Requirements

  • Python ≥ 3.11
  • An LLM API key for evaluation judging (strong model recommended — GPT-5.4 / Opus)
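A defensive preflight check reflecting the requirements above can catch misconfiguration early. A sketch only — the env var name LLM_API_KEY is an assumption, since (as the security scan notes) the skill does not declare which variable should hold the key:

```python
import os
import sys

def preflight():
    """Return a list of unmet runtime requirements (empty means OK)."""
    problems = []
    if sys.version_info < (3, 11):
        problems.append(f"Python >= 3.11 required, found {sys.version.split()[0]}")
    # LLM_API_KEY is a guessed name — SKILL.md does not say which env var to use.
    if not os.environ.get("LLM_API_KEY"):
        problems.append("missing LLM API key for evaluation judging")
    return problems
```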

License

Apache 2.0
