Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Funky Fund Flamingo

v1.0.1

Repair-first self-evolution for OpenClaw — audit logs, memory, and skills; run measurable mutation cycles. Get paid. Evolve. Repeat. Dolla dolla bill y'all.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for icemastert/funky-fund-flamingo.

Prompt preview: Install & Setup
Install the skill "Funky Fund Flamingo" (icemastert/funky-fund-flamingo) from ClawHub.
Skill page: https://clawhub.ai/icemastert/funky-fund-flamingo
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Canonical install target

openclaw skills install icemastert/funky-fund-flamingo

ClawHub CLI

Package manager switcher

npx clawhub@latest install funky-fund-flamingo
Security Scan
VirusTotal: Suspicious
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The name/description (repair-first, mutation cycles, revenue focus) aligns with the code and SKILL.md: the skill reads session logs, workspace memory, and the skills directory and produces evolution proposals and persistent state. That behavior is coherent with an evolution/meta-skill. However, embedded policy artifacts (master directive: must_evolve_each_cycle, no_op_forbidden) assert a stronger mandate than a typical 'run when asked' helper and are notable because they pressure continual mutation rather than optional inspections.
Instruction Scope
SKILL.md and the code instruct the agent to read local session transcripts (~/.openclaw/agents/<agent>/sessions/*.jsonl), MEMORY.md, USER.md, and the skills/ directory — all expected for an evolution tool — and to write persistent memory artifacts in memory/. Important risks: (1) extract_log explicitly 'treats the prompt as truth' and reconstructs an evolution history from generated prompts, which can let LLM-generated content be treated as authoritative input (self-reinforcing/poisoning). (2) master-directive and enforcement docs suggest forced evolution semantics (must_evolve_each_cycle/no_op_forbidden) which broaden the scope of changes the tool will consider acceptable. The SKILL.md warns that prompts produced by this skill may be sent to cloud LLM providers if the enclosing agent uses them, but that means sensitive local context could leave the host unless the user runs in dry-run/local-only modes.
Install Mechanism
No install spec / external downloads. This is an instruction+code bundle that runs with node and uses only fs/path/os; I found no download/extract or foreign package install in the provided files. That reduces supply-chain risk compared to remote archive installs.
Credentials
The skill requests no required credentials and only exposes reasonable optional env overrides (AGENT_NAME, MEMORY_DIR, size/time limits). It reads local agent session logs and memory files (sensitive data), which is coherent for its stated purpose. It also ships agent templates (openai/openrouter) that encourage cloud model use — SKILL.md notes this and warns about data leaving via the cloud model, but that remains a privacy decision for the user.
Persistence & Privilege
The skill is not always:true and does not request system-level permissions, but the included master-directive (must_evolve_each_cycle: true, no_op_forbidden: true, goal: 'Code Singularity') and execution-loop requirements indicate strong bias toward automatic/perpetual mutation. If an agent runs this skill autonomously (normal platform default) and review flags are not enforced, the combination of forced-mutation policy + relay/loop modes increases the risk of repeated, possibly unnecessary or surprising local file changes. The code does include review/dry-run flags and local-only safeguards (no remote git push by default), but the policy artifacts are more aggressive than most users likely expect.
What to consider before installing
Before installing, check the following to reduce risk:

  • Run in dry-run first: execute node index.js run --dry-run and inspect the generated prompt/artifacts in memory/ before letting any model consume them. Confirm the prompts do not expose secrets you don't want sent to a cloud LLM.
  • Use review mode by default: run with --review so the skill pauses before any significant edits; read the produced 'what_changed' and 'why_it_matters' sections carefully.
  • Back up your workspace: commit or copy skills/, MEMORY.md, USER.md, and the entire workspace before running loops. That makes rollback trivial if the skill proposes bad mutations.
  • Inspect how changes would be applied: search the full codebase for any child_process/exec/spawn calls, git operations, or write locations outside memory/ (safeWriteFile does a subpath check, but verify the rest of the files you didn't review). If you find code that performs file ops beyond creating memory artifacts, treat it as higher risk.
  • Override aggressive defaults if desired: the bundle includes YAML/JSON directives that set must_evolve_each_cycle and no_op_forbidden; the README/TREE comments hint you can set local overrides. If you prefer conservative behavior, set those directives to false or avoid enabling loop/relay modes.
  • Avoid routing generated prompts through cloud models unless you accept the data-exfiltration risk: SKILL.md correctly warns that prompts may contain session excerpts and memory. If you must use a cloud model, redact or limit input, or run the skill in dry-run and then manually review and submit trimmed prompts.
  • Be especially cautious about the 'treat the prompt as truth' behavior: extract_log and other tools intentionally trust prompts as authoritative. This can create a self-reinforcing loop where model outputs are ingested as 'history'; consider disabling or auditing that extraction logic.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🦩 Clawdis
Tags: automate, automation, cash, evolve, flamingo, fun, fund, funky, google, harry-potter, icantbelieveiatethewholething, jesus, latest, making, money, motor-oil, omg, pink, telegram, tiktok, top, top-10, top10, x, youtube
760 downloads · 0 stars · 2 versions
Updated 9h ago
v1.0.1
MIT-0

🦩 Funky Fund Flamingo — Make That Paper

Use this skill when you're ready to get paid. We inspect reality, kill breakage and value leaks, and run mutation cycles that produce concrete gains — so the stack earns, not just runs.

When To Use

  • Runtime logs are screaming and you need structured repair so shit works and keeps making money.
  • The agent is stable but stagnating — time for a deliberate capability mutation that moves the needle.
  • You want one execution plan that combines logs + memory + skills into a cycle that pays off.
  • You need continuous relay mode (--loop / --funky-fund-flamingo) so evolution runs in the background and the revenue keeps flowing.

Inputs And Context

  • Session logs: ~/.openclaw/agents/<agent>/sessions/*.jsonl
  • Workspace memory: MEMORY.md, memory/YYYY-MM-DD.md, USER.md
  • Installed skills list from workspace skills/
  • Optional environment overrides from ../../.env

Entrypoints

  • Main runner: index.js
  • Prompt builder and cycle logic: evolve.js

Run from workspace root:

node skills/funky-fund-flamingo/index.js run

Run from inside this skill directory:

node index.js run

Execution Modes

# single cycle — one shot, max impact
node index.js run

# alias command
node index.js /evolve

# human confirmation before significant edits (protect the bag)
node index.js run --review

# prompt generation only (writes prompt artifact to memory dir)
node index.js run --dry-run

# continuous relay — keep the money printer running
node index.js --loop
node index.js run --funky-fund-flamingo

Cycle Contract

Each cycle should:

  1. Inspect recent session transcript — find breakages, repetition, value leaks.
  2. Read memory + user context so evolution is aligned with what actually pays.
  3. Select mutation mode (repair, optimize, expand, instrument, personalize).
  4. Produce actionable mutation instructions and reporting so you see the ROI.
  5. Persist state (memory/evolution_state.json) and optionally schedule the next loop.
  6. Persist long-term evolution learning (memory/funky_fund_flamingo_persistent_memory.json) so strategy compounds and the bag gets bigger.

Safety Constraints (Protect the Bag)

  • No destructive git/file ops unless explicitly requested.
  • Repair and reliability first when instability is detected — downtime = no revenue.
  • Respect review mode before applying significant edits.
  • Keep changes scoped and explainable; no no-op cycles that burn compute for nothing.
  • Local-only execution: no publish, no push to remote git, no external tool spawning without intent.

External Endpoints

| URL | Data sent | Purpose |
| --- | --- | --- |
| None (from this skill's code) | n/a | This skill's Node.js code does not open sockets or make HTTP requests. It only reads/writes local files. |

Important: The repo includes agent config templates (agents/openai.yaml, agents/openrouter.yaml) for use by an OpenClaw (or other) agent. When you run an agent that uses this skill with a cloud model (OpenAI, OpenRouter, etc.), that agent will send the prompts this skill builds — which can include excerpts from session logs, memory, and workspace context — to the provider's API. So "local-only" applies to the skill binary itself; if the skill is invoked by an agent backed by a third-party LLM, data can leave the machine via that agent. To stay fully local, run node index.js run (or --dry-run) without routing the generated prompt through a cloud model.

Security & Privacy

  • Reads: Session logs under ~/.openclaw/agents/<agent>/sessions/*.jsonl, workspace MEMORY.md, memory/, USER.md, and the skills/ directory.
  • Writes: memory/evolution_state.json, memory/funky_fund_flamingo_persistent_memory.json, and optionally prompt artifacts in the memory dir. This skill does not push or publish anywhere; any outbound data is only via whatever agent/model stack you choose to run.
  • No network from skill code: The skill itself does not open sockets or make HTTP requests.

Optional environment variables

No env vars are required. The following are optional overrides (see evolve.js / README):

| Variable | Purpose | Typical default |
| --- | --- | --- |
| AGENT_NAME | Agent session folder under ~/.openclaw/agents/ | main |
| MEMORY_DIR | Directory for evolution state and persistent memory | workspace memory/ |
| TARGET_SESSION_BYTES | Max bytes read from latest session log | 64000 |
| LOOP_MIN_INTERVAL_SECONDS | Min delay between loop cycles | 900 |
| MAX_MEMORY_CHARS, MAX_TODAY_LOG_CHARS, MAX_PERSISTENT_MEMORY_CHARS | Content caps for prompts | see evolve.js |
| ECONOMIC_KEYWORDS | Comma-separated keywords for value scoring | built-in list |
| EVOLVE_REPORT_DIRECTIVE, EVOLVE_EXTRA_MODES, EVOLVE_ENABLE_SESSION_ARCHIVE | Behavior tweaks | |

Model invocation

Evolution can be run manually (node index.js run) or by an agent that uses this skill. In relay mode (--loop / --funky-fund-flamingo), this process only plans and writes prompts; it does not call any model API. If you run an agent that consumes this skill with OpenAI/OpenRouter/etc., that agent will perform the model calls. To avoid sending local context to a provider, run the skill in --dry-run and do not feed the generated prompt to a cloud model.

Master directive and mutation pressure

The master directive (funky-fund-flamingo-master-directive.json) sets must_evolve_each_cycle and no_op_forbidden, which push every cycle toward making a concrete change. That can increase how often local files are mutated. For lower risk, prefer --review (confirm before significant edits) or --dry-run (prompt generation only, no writes). You can also edit or override the directive to relax these flags.

Trust Statement

By using this skill, you run Node.js code that reads and writes files in your OpenClaw workspace and agent session directories. This skill's code does not send data to third parties; if an agent that uses this skill calls a cloud LLM, that agent (not this skill binary) sends the prompt. Only install if you trust the skill source (e.g. ClawHub and the publisher).

Supporting References

  • ADL.md — anti-degeneration so we don't break the money printer
  • VFM.md — value-focused mutation: only changes that pay
  • TREE.md — capability topology and revenue-ready nodes
  • .clawhub/FMEP.md (forced mutation execution policy)

Minimal Verification

node index.js --help

Dolla, dolla bill y'all. 🦩
