Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Venture Delegation

v1.0.0

Opus-level strategic decomposition for any opportunity, project, or task. Breaks work into atomic pieces with evals, assigns each to the cheapest capable model…

by KairoKid (@dodge1218)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for dodge1218/venture-delegation.

Prompt Preview: Install & Setup
Install the skill "Venture Delegation" (dodge1218/venture-delegation) from ClawHub.
Skill page: https://clawhub.ai/dodge1218/venture-delegation
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install venture-delegation

ClawHub CLI


npx clawhub@latest install venture-delegation
Security Scan

VirusTotal: Suspicious
OpenClaw: Benign (high confidence)
Purpose & Capability
The name/description (decompose tasks, assign to cheaper models, create atomic tasks with verifiable checks) matches the SKILL.md content. No unrelated credentials, binaries, or installs are requested; the model assignment table and evals are coherent with the stated purpose.
Instruction Scope
Runtime instructions direct the agent to write artifacts (files/URLs), run machine-verifiable evals (test -f, npm run build, curl, grep, screenshots, etc.), and scaffold code. Those actions are appropriate for a delegation tool but require filesystem, build-tool, and network capabilities. The skill does not attempt to read unrelated system files or environment variables, but it does give the agent discretion to create and execute project commands.
Install Mechanism
Instruction-only skill with no install spec and no external downloads; nothing will be written to disk by an installer. Lowest-risk install surface.
Credentials
The skill requests no environment variables, credentials, or config paths. Model names are referenced internally but do not imply external credentials. The level of access requested is proportional to the purpose.
Persistence & Privilege
always:false and default autonomous invocation are set. The skill does not request permanent presence or modification of other skills or system-wide configuration.
Assessment
This skill appears coherent, but keep these practical cautions in mind before enabling it:

  1. It expects the agent to create files and run build/eval commands (npm, curl, shell tests). Only allow it where you trust those actions; sandbox or restrict filesystem and network access if possible.
  2. Generated atoms may run project scripts that execute code. Review atoms and eval commands before permitting execution.
  3. The skill routes work to different internal model names (flash, sonnet, etc.), which can incur costs; verify model availability and cost policies.
  4. No secrets are requested by the skill, but if you attach it to a workspace with credentials or CI/CD, ensure it cannot access unrelated secrets.
  5. For tighter control, require manual approval of the atomic task plan before execution.

Like a lobster shell, security has layers — review code before you run it.

latest: vk971wv9gcrhk6pg4yrk9p56pt9843gfz
75 downloads
0 stars
1 version
Updated 3w ago
v1.0.0
MIT-0

Delegation — Think Once, Execute Cheap

Opus is the brain. Everything else is hands. This skill is the translation layer between insight and execution.

Core Principle

Never use a $0.10/task model for $0.002/task work. Never use a $0.002/task model for $0.10/task thinking.

OPPORTUNITY ──► OPUS THINKS ──► ATOMIC TASK QUEUE ──► CHEAP MODELS EXECUTE ──► EVALS VERIFY ──► SHIP
                 (once)           (written artifact)     (sonnet/flash/gptoss)   (automated)

Phase 0: VENTURE EVAL (new ideas only — skip for defined tasks)

When Ryan has a raw idea, before any decomposition, run the Venture Eval Protocol. This replaces VC due diligence with builder-optimized evaluation. 3 rounds max, then decide.

Round 1: Irrational Optimism (Flash — cheap, fast)

Prompt a cheap model to go MAXIMUM bullish. No skepticism allowed.

You are an irrationally optimistic founder evaluating this idea: {IDEA}

Assume everything goes right. Answer:
1. WHAT: one sentence
2. WHO PAYS: specific buyer (not "businesses" — name the persona)
3. WHY NOW: what changed in the world that makes this possible today
4. TAM: bottoms-up, not "1% of $X billion" — how many buyers × price
5. UNFAIR EDGE: what do we already have (infra, data, distribution, skills)
6. FIRST $1K: exact steps to the first thousand dollars in revenue
7. 10X SCENARIO: what does this look like if everything works for 2 years
8. EXISTING ASSETS: which of our products/pipelines/skills does this plug into

Round 2: Brutal Fix (Sonnet — stronger reasoning)

Take Round 1's output and try to KILL it:

You are a ruthless VC partner reviewing this pitch: {ROUND_1_OUTPUT}

For each of the 8 points, either:
- CONFIRM: evidence supports it, cite why
- FIX: the claim is wrong but fixable — here's how
- KILL: this is fundamentally broken and unfixable — here's why

Then answer:
- BIGGEST RISK: the one thing that kills this
- CAN WE TEST IT FOR <$100?: yes/no + how
- COMPARABLE EXITS: 3 companies in adjacent space that sold/IPO'd
- VERDICT: BUILD / PARK / KILL (with one-sentence reason)

Round 3: Questionnaire (only if Round 2 says BUILD)

If the idea survives, generate 5 questions that MUST be answered before committing resources:

Based on this evaluated idea: {ROUND_2_OUTPUT}

Generate exactly 5 questions where:
- Each question can be answered with data (not opinion)
- Each answer changes the build plan materially
- Each can be researched in <30 minutes
- Format: QUESTION | HOW TO ANSWER | WHAT CHANGES IF YES vs NO

Ryan answers the 5 questions → answers feed into Phase 1 decomposition.

Token Budget for Venture Eval

| Round | Model | Est. Tokens | Cost |
|---|---|---|---|
| R1: Optimism | flash | ~1500 | $0.003 |
| R2: Fix | sonnet | ~2000 | $0.06 |
| R3: Questions | flash | ~800 | $0.002 |
| Total | | ~4300 | $0.065 |

If we can't resolve it in $0.07 of reasoning, the idea isn't clear enough. Park it and revisit when more signal arrives.
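The budget's totals are simple sums; a quick sanity sketch in Python, with figures copied straight from the table above:

```python
# (tokens, cost in USD) per round, copied from the Venture Eval budget table
rounds = {
    "R1: Optimism (flash)":  (1500, 0.003),
    "R2: Fix (sonnet)":      (2000, 0.06),
    "R3: Questions (flash)": (800,  0.002),
}

total_tokens = sum(t for t, _ in rounds.values())
total_cost = round(sum(c for _, c in rounds.values()), 3)
assert (total_tokens, total_cost) == (4300, 0.065)  # matches the Total row
```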

When NOT to Venture Eval

  • Task is already defined (bug fix, feature request, maintenance)
  • Ryan explicitly says what to build
  • It's a client project with specs
  • It's infrastructure/tooling work

Phase 1: THINK (Opus only — ~500-2000 tokens output)

This is the ONLY phase that uses Opus. Everything after is delegated.

1a. Opportunity Frame (if Venture Eval was skipped)

Answer in ≤150 words:

WHAT: [one sentence — what is this]
WHO: [target customer — be specific, not "SMBs"]
WHY NOW: [timing signal — regulation, tech shift, market gap]
TAM: [total addressable market — even rough napkin math]
COMPETITORS: [top 3, their weakness]
OUR EDGE: [what we have that they don't — existing infra, distribution, data]
SLICE: [the specific wedge we'd enter with — not the whole market]

1b. Decompose into Atoms

Break the work into the smallest independently testable units.

Rules:

  • Each atom has ONE clear output (a file, a URL, a data point, a yes/no answer)
  • Each atom can be verified by a machine (not "looks good" — a command that returns pass/fail)
  • Each atom takes <15 min for a sub-agent
  • If an atom takes >15 min, it's not atomic — split again
  • Dependencies are explicit (atom B needs atom A's output file)

Output format:

| # | Atom | Output | Eval | Model | Depends | Est. |
|---|------|--------|------|-------|---------|------|
| 1 | Research competitor pricing | `research/pricing.md` | ≥3 competitors listed | flash | — | 3m |
| 2 | Scaffold Next.js app | `src/app/page.tsx` | `npm run build` exits 0 | sonnet | — | 5m |
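If you want to carry the plan as data rather than markdown, the table rows map naturally onto a small record type. A minimal Python sketch; the field names (`eval_cmd`, `depends_on`, `est_minutes`) are illustrative, not part of the skill:

```python
from dataclasses import dataclass, field

@dataclass
class Atom:
    """One independently testable unit of work (hypothetical schema)."""
    id: int
    description: str
    output: str                 # the ONE output: a file, URL, or data point
    eval_cmd: str               # shell command that exits 0 on pass
    model: str                  # "flash", "sonnet", "gptoss", ...
    depends_on: list[int] = field(default_factory=list)
    est_minutes: int = 5        # 15 or more means the atom must be split again

atoms = [
    Atom(1, "Research competitor pricing", "research/pricing.md",
         'test "$(grep -c "^##" research/pricing.md)" -ge 3', "flash",
         est_minutes=3),
    Atom(2, "Scaffold Next.js app", "src/app/page.tsx",
         "npm run build", "sonnet", est_minutes=5),
]
assert all(a.est_minutes < 15 for a in atoms)  # enforce the atomicity rule
```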

1c. Model Assignment

Is it code generation?          → sonnet
Is it bulk/template/classify?   → flash
Is it a batch of 20+?           → gptoss
Does it need >100K context?     → gemini-pro
Is it client-facing copy?      → opus (exception)
Is it a yes/no check?          → flash

Default: flash. Only upgrade when there's a reason.
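The decision list above is a straight chain of checks, so it reduces to a few lines of code. A sketch under the assumption that the caller can flag batch size, context length, and audience; the parameter names are invented for illustration:

```python
def assign_model(batch_size: int = 1, context_tokens: int = 0,
                 is_code: bool = False, client_facing: bool = False) -> str:
    """Route work to the cheapest capable model (hypothetical flags)."""
    if client_facing:
        return "opus"          # the one exception: client-facing copy
    if context_tokens > 100_000:
        return "gemini-pro"
    if batch_size >= 20:
        return "gptoss"
    if is_code:
        return "sonnet"
    return "flash"             # default, also covers yes/no checks

assert assign_model(is_code=True) == "sonnet"
assert assign_model(batch_size=500) == "gptoss"
assert assign_model() == "flash"
```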

1d. Eval Specification

Every atom gets a machine-verifiable eval:

| Eval Type | Example | Check |
|---|---|---|
| File exists | `research/pricing.md` | `test -f research/pricing.md` |
| Build passes | Next.js builds | `npm run build; echo $?` → 0 |
| HTTP 200 | Site is live | `curl -so /dev/null -w "%{http_code}" [url]` → 200 |
| Content check | ≥3 competitors | `grep -c "^##" research/pricing.md` ≥ 3 |
| Screenshot | UI renders correctly | Browser screenshot + image model eval |
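Most checks in the table share one shape: a shell command whose exit status decides pass/fail. A minimal runner sketch; the timeout and the `shell=True` choice are assumptions, not something the skill specifies:

```python
import subprocess

def run_eval(eval_cmd: str, timeout: int = 120) -> bool:
    """Run an atom's eval command; exit code 0 means the atom passed."""
    try:
        result = subprocess.run(eval_cmd, shell=True, timeout=timeout,
                                capture_output=True)
    except subprocess.TimeoutExpired:
        return False           # a hung eval counts as a failure
    return result.returncode == 0

run_eval("test -f research/pricing.md")   # the file-exists check from the table
```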

Phase 2: PLAN (still Opus, just ordering — fast)

  1. Topological sort by dependencies
  2. Group into waves (parallel atoms)
  3. Estimate total time = longest path through dependency graph
  4. Estimate total cost = sum of (model cost × est. time)

Write full plan to workspace/DELEGATION_PLAN.md.
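Steps 1 and 2 amount to a Kahn-style topological sort: repeatedly take every atom whose dependencies are already done, and each such batch is one wave. A sketch (raising on cycles is my addition; the skill does not say how to handle them):

```python
def plan_waves(deps: dict[int, list[int]]) -> list[list[int]]:
    """Group atoms into waves; atoms in one wave can run in parallel."""
    remaining = dict(deps)
    waves, done = [], set()
    while remaining:
        # every atom whose dependencies are all satisfied joins this wave
        wave = sorted(a for a, d in remaining.items()
                      if all(x in done for x in d))
        if not wave:
            raise ValueError("dependency cycle in the plan")
        waves.append(wave)
        done.update(wave)
        for a in wave:
            del remaining[a]
    return waves

# atom -> atoms it depends on (a hypothetical plan)
plan_waves({1: [], 2: [], 3: [1, 2], 4: [3]})  # → [[1, 2], [3], [4]]
```

The number of waves is the length of the longest dependency path, which is what the total-time estimate in step 3 is measuring.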

Phase 3: EXECUTE (Opus hands off — never touches work again)

Hand DELEGATION_PLAN.md to orchestrator → spawner pipeline.

Opus's ONLY role during execution: monitor completion events, re-route on failure.

Opus does NOT: write code, generate content, run builds, do research queries.

Phase 4: EVAL (automated)

After each atom: run eval command → pass/fail → retry with model escalation if needed.

fail → retry same model → upgrade model (flash→sonnet→opus) → mark failed
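That retry ladder can be sketched directly; `run` and `evaluate` are hypothetical callables standing in for the orchestrator's dispatch and the atom's eval command:

```python
ESCALATION = ["flash", "sonnet", "opus"]

def execute_with_escalation(atom_id, start_model, run, evaluate,
                            retries_per_model=1):
    """Try a model, retry on the same model, then climb the chain."""
    start = ESCALATION.index(start_model) if start_model in ESCALATION else 0
    for model in ESCALATION[start:]:
        for _ in range(1 + retries_per_model):
            run(atom_id, model)        # dispatch the atom (caller-supplied)
            if evaluate(atom_id):      # machine-verifiable eval
                return model           # which model finally passed
    return None                        # chain exhausted: mark failed
```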

Phase 5: LEARN (feeds auto-improve)

Append timing + pass/fail per atom to .learnings/LEARNINGS.md.

Anti-patterns

  • ❌ Opus writing code
  • ❌ Subjective evals ("looks nice")
  • ❌ Atoms bigger than 15 min
  • ❌ Skipping the frame for new opportunities
  • ❌ Re-thinking during execution (plan is locked after Phase 2)
  • ❌ Spending >$0.07 reasoning about an unvalidated idea
  • ❌ Using gptoss for <20 items
