Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
Auto Evolve
v3.9.1 · Automates skill code improvement with LLM-driven analysis, effect and cost tracking, dependency awareness, issue auto-closing, smart scheduling, and multi-la...
⭐ 0 · 40 · 0 current · 0 all-time
by Gao.QiLin@relunctance
MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
Capability signals
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
OpenClaw
Suspicious (medium confidence)
Purpose & Capability
The skill's name and description (automated code improvement, scheduling, effect/cost tracking, dependency analysis, issue auto-closing) match what the code implements: LLM-driven analysis, effect/cost trackers, dependency analysis, test runners, an IssueLinker (which uses `gh`), scheduler integration (openclaw cron), git operations, and release creation. However, the package metadata and SKILL.md declare no required binaries or environment variables, while the code and docs assume many external tools and credentials (git, the `gh` CLI, pytest/coverage, the `openclaw` CLI, and LLM credentials). That mismatch is unexpected and reduces transparency.
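Since none of these binaries is declared in the metadata, a quick preflight check is worth running before enabling the skill. The tool list below is taken from the scan findings above, not from any declared manifest:

```shell
#!/bin/sh
# Preflight: report which external tools the skill appears to assume.
# Tool names come from the scan findings, not from declared metadata.
for tool in git gh pytest openclaw; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found:   $tool"
  else
    echo "missing: $tool"
  fi
done
```

Anything reported missing will cause parts of the skill (tests, issue closing, scheduling) to fail silently or loudly at runtime.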
Instruction Scope
SKILL.md instructs the agent to read repository files (README, SKILL.md), write manifests under .auto-evolve/.iterations and .learnings, run scans and tests, call an LLM via the OpenClaw configuration, and create, commit, and push changes. The runtime docs and code also perform operations that touch git history, create releases, and call the `gh` CLI to list and close issues. These are within the declared purpose, but the instructions implicitly require access to repos, git remotes, and GitHub credentials, none of which is declared. The skill will send code and context to whatever LLM is configured in OpenClaw, potentially including project code and pending diffs, so sensitive repo content may leave the machine depending on platform configuration.
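Because repo content may be forwarded to an external LLM, it is worth scanning the working tree for obvious credential-looking strings before enabling the skill. This is a generic precaution, not something the skill provides; the patterns are illustrative, and a dedicated scanner such as gitleaks is more thorough:

```shell
#!/bin/sh
# Crude scan of tracked files for credential-looking strings, run from
# inside the repo. Patterns are illustrative, not exhaustive.
git grep -nE '(api[_-]?key|secret|token) *[:=]' -- . \
  && echo "review the matches above before enabling LLM access" \
  || echo "no obvious credential-looking strings found"
```

Any hits should be moved out of the repo (or into ignored files) before an autonomous skill starts shipping diffs to a third-party model.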
Install Mechanism
There is no install spec in the registry (the skill is instruction-only), which lowers the risk of arbitrary downloads. But the bundle contains many executable Python scripts and tests that the agent will run; the absence of an install step means the skill expects the runtime environment to already satisfy its dependencies. That expectation is not declared, which is a transparency and usability issue, though not itself a code-download risk.
Credentials
The skill declares no required environment variables or primary credential, yet SKILL.md and the code reference LLM credentials (OPENAI_API_KEY / MINIMAX_API_KEY, or the OpenClaw LLM config), and the code invokes external CLIs (`gh`, `git`, `pytest`, `openclaw`). It also auto-closes GitHub issues and pushes commits and releases, actions that require authenticated CLI sessions or git credentials. Declaring none of these in the metadata is misleading: the skill can only operate when those credentials and tools are present, yet none of them is surfaced to the user.
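A quick way to see which of these credentials the skill could silently pick up from your environment (the variable names are those the scan says SKILL.md references):

```shell
#!/bin/sh
# Report which LLM credentials are present in this shell's environment.
for var in OPENAI_API_KEY MINIMAX_API_KEY; do
  if printenv "$var" >/dev/null 2>&1; then
    echo "$var is set; the skill could use it"
  else
    echo "$var is not set"
  fi
done
# gh reuses whatever auth is already configured on this machine:
command -v gh >/dev/null 2>&1 && gh auth status || echo "gh absent or unauthenticated"
```

If you want a hard guarantee that the skill cannot push or close issues, run it in a shell where these variables are unset and `gh`/`git` have no stored credentials.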
Persistence & Privilege
The skill is not set `always:true` (good). It can schedule recurring scans via `openclaw cron add` (if available) and will create local artifacts (.auto-evolve/, .iterations/, .learnings/) plus git commits and tags. Those are elevated behaviors: autonomous changes, cron jobs, remote pushes, and issue closing. Autonomy combined with the ability to push changes and close issues increases the blast radius; the skill has those capabilities in both code and docs, so treat it as high-impact. There is no evidence it modifies other skills' configs, but it will create cron jobs and write to repo and config paths.
What to consider before installing
This package will read and modify repositories, run tests, call an LLM configured in OpenClaw (possibly using OPENAI_API_KEY or MINIMAX_API_KEY), create git commits and tags, push to remotes, and use the `gh` CLI to comment on and close issues, yet the skill metadata declares no required binaries or environment variables. Before installing or enabling:
- Assume it will act on repositories it is configured to monitor: run it only on test repositories first. Use `--dry-run` and inspect generated `.iterations`/`pending-review.json` before confirming.
- Audit the Python scripts (especially scripts/auto-evolve.py and IssueLinker/Changelog functionality). Look for network endpoints and LLM prompt contents.
- Ensure you understand how your system authenticates `git` and `gh` (the skill relies on existing CLI auth). If you don't want it pushing changes or closing issues, withhold those credentials or run only in `semi-auto` mode.
- Run in an isolated environment (dedicated account or container) and avoid scheduling a full-auto cron job until you are comfortable with the skill's behavior.
- If you need to proceed, restrict monitored repositories in config to a single sandbox repo, set `mode` to `semi-auto`, and verify quality gates (tests, py_compile) actually run in your environment.
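The last point can be sketched as a config fragment. The file path and key names below are hypothetical guesses for illustration only; the skill's actual config schema is not shown on this page, so verify against its documentation before relying on it:

```shell
#!/bin/sh
# Hypothetical sandbox config: path and key names are illustrative guesses,
# NOT the skill's documented schema. Check its docs for the real format.
mkdir -p .auto-evolve
cat > .auto-evolve/config.json <<'EOF'
{
  "repositories": ["your-org/sandbox-repo"],
  "mode": "semi-auto"
}
EOF
```

The intent is what matters: one sandbox repo, no full-auto mode, so any misbehavior is contained and reviewable.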
Because the required CLIs and credentials are not declared, treat the omission as a transparency risk: ask the maintainer to explicitly list required binaries and environment variables, and to add safeguards (explicit prompts before remote pushes and issue closing) before enabling automatic runs on production repos.
latest: vk97519egshg2qnkcw4va965x8h848bq0
