Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Funky Fund Flamingo

v1.0.1

Repair-first self-evolution for OpenClaw — audit logs, memory, and skills; run measurable mutation cycles. Get paid. Evolve. Repeat. Dolla dolla bill y'all.

0 · 642 · 2 current · 2 all-time
MIT-0
Download zip
Security Scan
VirusTotal
Suspicious
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The name/description (repair-first, mutation cycles, revenue focus) aligns with the code and SKILL.md: the skill reads session logs, workspace memory, and the skills directory and produces evolution proposals and persistent state. That behavior is coherent with an evolution/meta-skill. However, embedded policy artifacts (master directive: must_evolve_each_cycle, no_op_forbidden) assert a stronger mandate than a typical 'run when asked' helper and are notable because they pressure continual mutation rather than optional inspections.
Instruction Scope
SKILL.md and the code instruct the agent to read local session transcripts (~/.openclaw/agents/<agent>/sessions/*.jsonl), MEMORY.md, USER.md, and the skills/ directory — all expected for an evolution tool — and to write persistent memory artifacts in memory/. Two risks stand out: (1) extract_log explicitly 'treats the prompt as truth' and reconstructs an evolution history from generated prompts, which lets LLM-generated content be treated as authoritative input (a self-reinforcing/poisoning loop); (2) the master-directive and enforcement docs impose forced-evolution semantics (must_evolve_each_cycle, no_op_forbidden), which broaden the scope of changes the tool will consider acceptable. SKILL.md warns that prompts produced by this skill may be sent to cloud LLM providers if the enclosing agent uses them; in practice that means sensitive local context can leave the host unless the user runs in dry-run or local-only modes.
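The session transcripts the skill reads are JSONL (one JSON object per line). A minimal sketch of that ingestion pattern illustrates why 'treating the prompt as truth' is risky; the record shape ({role, content}) is an assumption for illustration, not the skill's actual schema:

```javascript
// Minimal sketch of JSONL session-log ingestion, the pattern the scan
// flags: whatever appears in the log is treated as authoritative history.
// The {role, content} record shape is an assumption, not the skill's schema.
function parseSessionLog(jsonlText) {
  return jsonlText
    .split('\n')
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line));
}

// Example transcript: an assistant-generated line is ingested as "history"
// exactly like a user-authored one, which is the self-reinforcing loop.
const sample = [
  '{"role":"user","content":"run an evolution cycle"}',
  '{"role":"assistant","content":"cycle 1 complete; mutated skills/foo"}',
].join('\n');

const entries = parseSessionLog(sample);
console.log(entries.length); // 2 entries, both trusted equally
```

Nothing in this pattern distinguishes model output from user input, which is why auditing or disabling the extraction logic matters.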
Install Mechanism
No install spec / external downloads. This is an instruction+code bundle that runs with node and uses only fs/path/os; I found no download/extract or foreign package install in the provided files. That reduces supply-chain risk compared to remote archive installs.
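That claim is easy to re-verify locally before running anything. A hedged audit sketch (the path is an illustrative fixture; in practice point the grep at the downloaded skill directory, and note the pattern catches common CommonJS require forms but not dynamic imports):

```shell
# Build a throwaway fixture so the audit can be demonstrated end to end;
# replace /tmp/skill-audit-demo with the real skill directory in practice.
mkdir -p /tmp/skill-audit-demo
cat > /tmp/skill-audit-demo/index.js <<'EOF'
const fs = require('fs');
const path = require('path');
const os = require('os');
EOF

# The audit itself: flag any require of process-spawning or network modules.
if grep -rnE "require\(['\"](child_process|http|https|net|dns)['\"]\)" /tmp/skill-audit-demo; then
  echo "review these requires before running"
else
  echo "no spawn/network requires found"
fi
```

A clean result is consistent with the fs/path/os-only claim above; any hit deserves a manual read before the skill is executed.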
Credentials
The skill requests no required credentials and only exposes reasonable optional env overrides (AGENT_NAME, MEMORY_DIR, size/time limits). It reads local agent session logs and memory files (sensitive data), which is coherent for its stated purpose. It also ships agent templates (openai/openrouter) that encourage cloud model use — SKILL.md notes this and warns about data leaving via the cloud model, but that remains a privacy decision for the user.
Persistence & Privilege
The skill is not marked always: true and does not request system-level permissions, but the included master-directive (must_evolve_each_cycle: true, no_op_forbidden: true, goal: 'Code Singularity') and execution-loop requirements indicate a strong bias toward automatic, perpetual mutation. If an agent runs this skill autonomously (the normal platform default) and review flags are not enforced, the combination of the forced-mutation policy with relay/loop modes increases the risk of repeated, possibly unnecessary or surprising local file changes. The code does include review/dry-run flags and local-only safeguards (no remote git push by default), but the policy artifacts are more aggressive than most users likely expect.
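If you want conservative behavior, the directive fields quoted above can in principle be overridden locally. A hypothetical override fragment; the key names mirror the fields the scan quotes, but the actual override filename and mechanism were not verified:

```yaml
# Hypothetical local override; key names come from the directive fields the
# scan quotes (must_evolve_each_cycle, no_op_forbidden). The file location
# and merge behavior are assumptions — check the bundle's README/TREE notes.
must_evolve_each_cycle: false
no_op_forbidden: false
```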
What to consider before installing
What to check and how to reduce risk before installing:

- Run in dry-run first: execute node index.js run --dry-run and inspect the generated prompt/artifacts in memory/ before letting any model consume them. Confirm the prompts do not expose secrets you don't want sent to a cloud LLM.
- Use review mode by default: run with --review so the skill pauses before any significant edits; read the produced 'what_changed' and 'why_it_matters' sections carefully.
- Back up your workspace: commit or copy skills/, MEMORY.md, USER.md, and the entire workspace before running loops. That makes rollback trivial if the skill proposes bad mutations.
- Inspect how changes would be applied: search the full codebase for any child_process/exec/spawn calls, git operations, or write locations outside memory/ (safeWriteFile does a subpath check, but verify the rest of the files you didn't review). If you find code that performs file ops beyond creating memory artifacts, treat it as higher risk.
- Override aggressive defaults if desired: the bundle includes YAML/JSON directives that set must_evolve_each_cycle and no_op_forbidden; the README/TREE comments hint you can set local overrides. If you prefer conservative behavior, set those directives to false or avoid enabling loop/relay modes.
- Avoid routing generated prompts through cloud models unless you accept the data-exfiltration risk: SKILL.md correctly warns that prompts may contain session excerpts and memory. If you must use a cloud model, redact or limit input, or run the skill in dry-run and then manually review and submit trimmed prompts.
- Be especially cautious about the 'treat the prompt as truth' behaviour: extract_log and other tools intentionally trust prompts as authoritative. This can create a self-reinforcing loop where model outputs are ingested as 'history'; consider disabling or auditing that extraction logic.
If you want, I can (1) scan the remaining truncated files for exec/network calls, (2) point to exact lines that write files and where, or (3) suggest minimal configuration changes to make this skill conservative by default.

Like a lobster shell, security has layers — review code before you run it.

Tags: automate, automation, cash, evolve, flamingo, fun, fund, funky, google, harry-potter, icantbelieveiatethewholething, jesus, latest, making, money, motor-oil, omg, pink, telegram, tiktok, top, top-10, top10, x, youtube

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

Runtime requirements

🦩 Clawdis
