Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Swarm

Cut your LLM costs by 200x. Offload parallel, batch, and research work to Gemini Flash workers instead of burning your expensive primary model.

MIT-0 · Free to use, modify, and redistribute. No attribution required.
8 · 3.1k · 14 current installs · 16 all-time installs
by ChairForce (@Chair4ce) · MIT-0
Security Scan
VirusTotal: Suspicious — View report →
OpenClaw: Suspicious (high confidence)
Purpose & Capability
The skill advertises itself as an instruction-only cost-savings orchestrator, and the registry metadata lists no required environment variables, yet the included repository and SKILL.md clearly require LLM provider API keys (GEMINI_API_KEY, OPENAI_API_KEY, ANTHROPIC_API_KEY, GROQ_API_KEY) and optionally Supabase credentials. The README and setup steps instruct cloning the GitHub repo and running npm install — so this is not purely docs-only. The declared minimal requirement (node only, no env vars) is inconsistent with the skill's actual needs.
Instruction Scope
SKILL.md and INSTALL.md instruct the agent/user to run a local daemon, run an interactive setup that validates API keys by calling provider endpoints, save API keys to files under ~/.config/clawdbot, enable web search grounding, and optionally add guidance to AGENTS.md so agents prefer Swarm for parallel tasks. These instructions go beyond a simple helper: they require network calls, persistent local services, writing secrets to disk, and altering agent guidance — all of which expand the attack surface and could bias agent behavior.
Install Mechanism
The registry claims 'no install spec', but the package includes full runtime code and a README/setup that instruct git clone + npm install. The source is on GitHub (a well-known host), which lowers some risk versus an arbitrary download, but the mismatch between 'instruction-only' metadata and the presence of executable code is concerning — actual install behavior may differ from what is promised.
Credentials
Registry lists no required env vars or primary credential, but the code and docs repeatedly reference GEMINI_API_KEY and other provider keys, and tests/benchmarks mention SUPABASE_URL and SUPABASE_SERVICE_KEY. The setup wizard saves API keys to disk (~/.config/clawdbot/<provider>-key.txt). Requesting and persisting multiple provider and service credentials (not declared) is disproportionate to the metadata and should be explicitly disclosed before installation.
Persistence & Privilege
The skill runs a background daemon (http://localhost:9999), persists configuration and daily metrics under ~/.config/clawdbot, and writes provider API keys to disk with limited file permissions. While not marked always:true, the daemon is persistent and can make outbound requests to validate keys and perform searches. Persisting secrets locally and altering agent guidance (AGENTS.md) increase lasting privilege and potential exposure.
Scan Findings in Context
[ignore-previous-instructions] expected: The SKILL.md/CHANGELOG reference prompt-injection patterns and a security module that detects/ignores such attempts. The scanner likely matched document text describing the defense (legitimate), but any presence of these strings in runtime prompts should be audited.
[system-prompt-override] expected: References to system-prompt override appear in the docs (not as an instruction to exfiltrate). Still, SKILL.md and PUBLISHING.md include instructions that encourage modifying agent guidance and SKILL.md itself — this could be abused to influence agent/system prompts and should be reviewed.
What to consider before installing
Key points to consider before installing/using this skill:

  • Metadata mismatch: the registry claims no credentials and 'instruction-only', but the package includes runnable code and requires LLM provider API keys (e.g., GEMINI_API_KEY) and possibly Supabase keys. Do not assume no secrets are needed.
  • Secrets on disk: the setup wizard saves API keys under ~/.config/clawdbot (provider-key.txt). If you install, be aware secrets will be persisted locally; review save paths and file permissions, and consider using least-privilege or ephemeral/test keys.
  • Review source before running: because the repo contains executable JavaScript and a daemon, inspect the code (lib/, bin/) or run it in an isolated environment (container, VM) first. Pay attention to network calls (validateApiKey, web search grounding, any outbound telemetry) and any code that sends data off-host.
  • Network & provider scope: the skill performs provider API calls to validate keys and run worker requests (Gemini/OpenAI/Anthropic/Groq). Only provide keys scoped with minimal permissions, monitor usage and cost caps, and consider setting cost limits in config before heavy use.
  • AGENTS.md / prompt guidance: INSTALL.md suggests adding guidance to agent configuration so agents preferentially use Swarm. That can bias agent behavior — do not apply these changes without review.
  • Run initial tests in a sandbox: run npm run diagnose and the test suite in an isolated environment with dummy or limited credentials. Confirm what gets persisted (metrics, caches) and whether any unexpected endpoints are contacted.
  • Verify origin & integrity: confirm the GitHub repo (https://github.com/Chair4ce/node-scaling) is authentic and matches the published package. If you cannot confirm provenance, avoid installing runnable code into production agents.

Like a lobster shell, security has layers — review code before you run it.

Current version: v1.3.7
Download zip

Tags: latest, parallel, performance, research, scaling, workers

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

Runtime requirements

🐝 Clawdis
Bins: node

SKILL.md

Swarm — Cut Your LLM Costs by 200x

Turn your expensive model into an affordable daily driver. Offload the boring stuff to Gemini Flash workers — parallel, batch, research — at a fraction of the cost.

At a Glance

30 tasks via       Time   Cost
Opus (sequential)  ~30s   ~$0.50
Swarm (parallel)   ~1s    ~$0.003

When to Use

Swarm is ideal for:

  • 3+ independent tasks (research, summaries, comparisons)
  • Comparing or researching multiple subjects
  • Multiple URLs to fetch/analyze
  • Batch processing (documents, entities, facts)
  • Complex analysis needing multiple perspectives → use chain

Quick Reference

# Check daemon (do this every session)
swarm status

# Start if not running
swarm start

# Parallel prompts
swarm parallel "What is X?" "What is Y?" "What is Z?"

# Research multiple subjects
swarm research "OpenAI" "Anthropic" "Mistral" --topic "AI safety"

# Discover capabilities
swarm capabilities

Execution Modes

Parallel (v1.0)

N prompts → N workers simultaneously. Best for independent tasks.

swarm parallel "prompt1" "prompt2" "prompt3"

Research (v1.1)

Multi-phase: search → fetch → analyze. Uses Google Search grounding.

swarm research "Buildertrend" "Jobber" --topic "pricing 2026"

Chain (v1.3) — Refinement Pipelines

Data flows through multiple stages, each with a different perspective/filter. Stages run in sequence; tasks within a stage run in parallel.

Stage modes:

  • parallel — N inputs → N workers (same perspective)
  • single — merged input → 1 worker
  • fan-out — 1 input → N workers with DIFFERENT perspectives
  • reduce — N inputs → 1 synthesized output
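The four stage modes above can be sketched as a single dispatcher. This is an illustrative sketch only — `runStage` and the `worker(input, perspective)` signature are assumptions, not the skill's actual internals:

```javascript
// Sketch of the four stage modes, assuming a worker(input, perspective)
// function; names are illustrative, not the skill's actual internals.
async function runStage(mode, inputs, perspectives, worker) {
  switch (mode) {
    case 'parallel':   // N inputs → N workers, same perspective
      return Promise.all(inputs.map((i) => worker(i, perspectives[0])));
    case 'single':     // merged input → 1 worker
      return [await worker(inputs.join('\n'), perspectives[0])];
    case 'fan-out':    // 1 input → N workers with different perspectives
      return Promise.all(perspectives.map((p) => worker(inputs[0], p)));
    case 'reduce':     // N inputs → 1 synthesized output
      return [await worker(inputs.join('\n'), 'synthesizer')];
    default:
      throw new Error(`unknown stage mode: ${mode}`);
  }
}
```

For example, a fan-out stage over perspectives ['analyst', 'critic'] would issue two worker calls against the same input.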

Auto-chain — describe what you want, get an optimal pipeline:

curl -X POST http://localhost:9999/chain/auto \
  -d '{"task":"Find business opportunities","data":"...market data...","depth":"standard"}'

Manual chain:

swarm chain pipeline.json
# or
echo '{"stages":[...]}' | swarm chain --stdin
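A hypothetical pipeline.json might look like the sketch below. The exact field names (mode, perspective, tasks) are assumptions inferred from the documented stage modes, not a verified schema — check the repo's examples before relying on them:

```json
{
  "stages": [
    { "mode": "parallel", "perspective": "extractor", "tasks": ["...doc A...", "...doc B..."] },
    { "mode": "fan-out", "perspectives": ["analyst", "critic"] },
    { "mode": "reduce", "perspective": "synthesizer" }
  ]
}
```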

Depth presets: quick (2 stages), standard (4), deep (6), exhaustive (8)

Built-in perspectives: extractor, filter, enricher, analyst, synthesizer, challenger, optimizer, strategist, researcher, critic

Preview without executing:

curl -X POST http://localhost:9999/chain/preview \
  -d '{"task":"...","depth":"standard"}'

Benchmark (v1.3)

Compare single vs parallel vs chain on the same task with LLM-as-judge scoring.

curl -X POST http://localhost:9999/benchmark \
  -d '{"task":"Analyze X","data":"...","depth":"standard"}'

Scores on 6 FLASK dimensions: accuracy (2x weight), depth (1.5x), completeness, coherence, actionability (1.5x), nuance.
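The weighted aggregate over those six dimensions could be computed as below. The weights come from the docs; the aggregation itself (a weighted mean on a 0-10 scale) is an assumption, since only the dimensions and weights are documented:

```javascript
// Weighted mean over the six FLASK dimensions; weights are from the
// docs, the weighted-mean aggregation is an assumption.
const FLASK_WEIGHTS = {
  accuracy: 2, depth: 1.5, completeness: 1,
  coherence: 1, actionability: 1.5, nuance: 1,
};

function flaskScore(scores) {
  let total = 0, weightSum = 0;
  for (const [dim, weight] of Object.entries(FLASK_WEIGHTS)) {
    total += (scores[dim] ?? 0) * weight; // missing dimensions count as 0
    weightSum += weight;
  }
  return total / weightSum; // stays on a 0-10 scale if inputs are 0-10
}
```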

Capabilities Discovery (v1.3)

Lets the orchestrator discover what execution modes are available:

swarm capabilities
# or
curl http://localhost:9999/capabilities

Prompt Cache (v1.3.2)

LRU cache for LLM responses. 212x speedup on cache hits (parallel), 514x on chains.

  • Keyed by hash of instruction + input + perspective
  • 500 entries max, 1 hour TTL
  • Skips web search tasks (need fresh data)
  • Persists to disk across daemon restarts
  • Per-task bypass: set task.cache = false
# View cache stats
curl http://localhost:9999/cache

# Clear cache
curl -X DELETE http://localhost:9999/cache

Cache stats show in swarm status.

Stage Retry (v1.3.2)

If tasks fail within a chain stage, only the failed tasks get retried (not the whole stage). Default: 1 retry. Configurable per-phase via phase.retries or globally via options.stageRetries.
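Retrying only the failed tasks while keeping successful results in place could look like this sketch; `runTask` and the task shape are illustrative assumptions, not the skill's actual code:

```javascript
// Retry only failed tasks within a stage; successful results keep
// their original positions. runTask is an illustrative assumption.
async function runStageWithRetry(tasks, runTask, retries = 1) {
  const results = new Array(tasks.length);
  let pending = tasks.map((task, i) => ({ task, i }));
  for (let attempt = 0; attempt <= retries && pending.length; attempt++) {
    const settled = await Promise.allSettled(pending.map(({ task }) => runTask(task)));
    const failed = [];
    settled.forEach((outcome, j) => {
      if (outcome.status === 'fulfilled') results[pending[j].i] = outcome.value;
      else failed.push(pending[j]); // only these get retried
    });
    pending = failed;
  }
  if (pending.length) throw new Error(`${pending.length} task(s) failed after retries`);
  return results;
}
```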

Cost Tracking (v1.3.1)

All endpoints return cost data in their complete event:

  • session — current daemon session totals
  • daily — persisted across restarts, accumulates all day
swarm status        # Shows session + daily cost
swarm savings       # Monthly savings report

Web Search (v1.1)

Workers search the live web via Google Search grounding (Gemini only, no extra cost).

# Research uses web search by default
swarm research "Subject" --topic "angle"

# Parallel with web search
curl -X POST http://localhost:9999/parallel \
  -d '{"prompts":["Current price of X?"],"options":{"webSearch":true}}'

JavaScript API

const { parallel, research } = require('~/clawd/skills/node-scaling/lib');
const { SwarmClient } = require('~/clawd/skills/node-scaling/lib/client');

// Simple parallel
const result = await parallel(['prompt1', 'prompt2', 'prompt3']);

// Client with streaming
const client = new SwarmClient();
for await (const event of client.parallel(prompts)) { ... }
for await (const event of client.research(subjects, topic)) { ... }

// Chain
const result = await client.chainSync({ task, data, depth });

Daemon Management

swarm start              # Start daemon (background)
swarm stop               # Stop daemon
swarm status             # Status, cost, cache stats
swarm restart            # Restart daemon
swarm savings            # Monthly savings report
swarm logs [N]           # Last N lines of daemon log

Performance (v1.3.2)

Mode               Tasks  Time    Notes
Parallel (simple)  5      ~700ms  142ms/task effective
Parallel (stress)  10     ~1.2s   123ms/task effective
Chain (standard)   5      ~14s    3-stage multi-perspective
Chain (quick)      2      ~3s     2-stage extract+synthesize
Cache hit          any    ~3-5ms  200-500x speedup
Research (web)     2      ~15s    Google grounding latency

Config

Location: ~/.config/clawdbot/node-scaling.yaml

node_scaling:
  enabled: true
  limits:
    max_nodes: 16
    max_concurrent_api: 16
  provider:
    name: gemini
    model: gemini-2.0-flash
  web_search:
    enabled: true
    parallel_default: false
  cost:
    max_daily_spend: 10.00

Troubleshooting

Issue                   Fix
Daemon not running      swarm start
No API key              Set GEMINI_API_KEY or run npm run setup
Rate limited            Lower max_concurrent_api in config
Web search not working  Ensure provider is gemini + web_search.enabled
Cache stale results     curl -X DELETE http://localhost:9999/cache
Chain too slow          Use depth: "quick" or check context size

Structured Output (v1.3.7)

Force JSON output with schema validation — zero parse failures on structured tasks.

# With built-in schema
curl -X POST http://localhost:9999/structured \
  -d '{"prompt":"Extract entities from: Tim Cook announced iPhone 17","schema":"entities"}'

# With custom schema
curl -X POST http://localhost:9999/structured \
  -d '{"prompt":"Classify this text","data":"...","schema":{"type":"object","properties":{"category":{"type":"string"}}}}'

# JSON mode (no schema, just force JSON)
curl -X POST http://localhost:9999/structured \
  -d '{"prompt":"Return a JSON object with name, age, city for a fictional person"}'

# List available schemas
curl http://localhost:9999/structured/schemas

Built-in schemas: entities, summary, comparison, actions, classification, qa

Uses Gemini's native response_mime_type: application/json + responseSchema for guaranteed JSON output. Includes schema validation on the response.
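The response-side check could be as small as the sketch below, which validates a parsed response against a tiny subset of JSON Schema (type + properties only). This is an assumption — the docs don't show the skill's actual validator:

```javascript
// Minimal structural check of parsed JSON against a tiny subset of
// JSON Schema (type + properties); a sketch, not the skill's validator.
function validates(value, schema) {
  if (schema.type === 'object') {
    if (typeof value !== 'object' || value === null || Array.isArray(value)) return false;
    return Object.entries(schema.properties ?? {})
      .every(([key, sub]) => key in value && validates(value[key], sub));
  }
  if (schema.type === 'string') return typeof value === 'string';
  if (schema.type === 'number') return typeof value === 'number';
  if (schema.type === 'array') return Array.isArray(value);
  return true; // unknown types pass in this sketch
}
```

For instance, the custom schema from the classification example accepts `{"category": "news"}` but rejects a response where `category` is a number.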

Majority Voting (v1.3.7)

Same prompt → N parallel executions → pick the best answer. Higher accuracy on factual/analytical tasks.

# Judge strategy (LLM picks best — most reliable)
curl -X POST http://localhost:9999/vote \
  -d '{"prompt":"What are the key factors in SaaS pricing?","n":3,"strategy":"judge"}'

# Similarity strategy (consensus — zero extra cost)
curl -X POST http://localhost:9999/vote \
  -d '{"prompt":"What year was Python released?","n":3,"strategy":"similarity"}'

# Longest strategy (heuristic — zero extra cost)
curl -X POST http://localhost:9999/vote \
  -d '{"prompt":"Explain recursion","n":3,"strategy":"longest"}'

Strategies:

  • judge — LLM scores all candidates on accuracy/completeness/clarity/actionability, picks winner (N+1 calls)
  • similarity — Jaccard word-set similarity, picks consensus answer (N calls, zero extra cost)
  • longest — Picks longest response as heuristic for thoroughness (N calls, zero extra cost)
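The similarity strategy as documented (Jaccard word-set similarity, pick the consensus answer) can be sketched as below. The docs only name the metric; the scoring rule (highest average similarity to the other candidates) is an assumption:

```javascript
// Jaccard word-set similarity between two answers.
function jaccard(a, b) {
  const wordsA = new Set(a.toLowerCase().split(/\s+/).filter(Boolean));
  const wordsB = new Set(b.toLowerCase().split(/\s+/).filter(Boolean));
  const intersection = [...wordsA].filter((w) => wordsB.has(w)).length;
  const union = new Set([...wordsA, ...wordsB]).size;
  return union === 0 ? 0 : intersection / union;
}

// Consensus pick: the candidate most similar to all others in total.
// This scoring rule is an assumption; the docs only name the metric.
function pickConsensus(candidates) {
  let bestIndex = 0, bestScore = -1;
  candidates.forEach((candidate, i) => {
    let score = 0;
    candidates.forEach((other, j) => {
      if (i !== j) score += jaccard(candidate, other);
    });
    if (score > bestScore) { bestScore = score; bestIndex = i; }
  });
  return candidates[bestIndex];
}
```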

When to use: Factual questions, critical decisions, or any task where accuracy > speed.

Strategy    Calls  Extra Cost  Quality
similarity  N      $0          Good (consensus)
longest     N      $0          Decent (heuristic)
judge       N+1    ~$0.0001    Best (LLM-scored)

Self-Reflection (v1.3.5)

Optional critic pass after chain/skeleton output. Scores 5 dimensions, auto-refines if below threshold.

# Add reflect:true to any chain or skeleton request
curl -X POST http://localhost:9999/chain/auto \
  -d '{"task":"Analyze the AI chip market","data":"...","reflect":true}'

curl -X POST http://localhost:9999/skeleton \
  -d '{"task":"Write a market analysis","reflect":true}'

Proven: improved weak output from 5.0 → 7.6 avg score. Skeleton + reflect scored 9.4/10.

Skeleton-of-Thought (v1.3.6)

Generate outline → expand each section in parallel → merge into coherent document. Best for long-form content.

curl -X POST http://localhost:9999/skeleton \
  -d '{"task":"Write a comprehensive guide to SaaS pricing","maxSections":6,"reflect":true}'

Performance: 14,478 chars in 21s (675 chars/sec) — 5.1x more content than chain at 2.9x higher throughput.

Metric                Chain          Skeleton-of-Thought  Winner
Output size           2,856 chars    14,478 chars         SoT (5.1x)
Throughput            234 chars/sec  675 chars/sec        SoT (2.9x)
Duration              12s            21s                  Chain (faster)
Quality (w/ reflect)  ~7-8/10        9.4/10               SoT

When to use what:

  • SoT → long-form content, reports, guides, docs (anything with natural sections)
  • Chain → analysis, research, adversarial review (anything needing multiple perspectives)
  • Parallel → independent tasks, batch processing
  • Structured → entity extraction, classification, any task needing reliable JSON
  • Voting → factual accuracy, critical decisions, consensus-building

API Endpoints

Method  Path                 Description
GET     /health              Health check
GET     /status              Detailed status + cost + cache
GET     /capabilities        Discover execution modes
POST    /parallel            Execute N prompts in parallel
POST    /research            Multi-phase web research
POST    /skeleton            Skeleton-of-Thought (outline → expand → merge)
POST    /chain               Manual chain pipeline
POST    /chain/auto          Auto-build + execute chain
POST    /chain/preview       Preview chain without executing
POST    /chain/template      Execute pre-built template
POST    /structured          Forced JSON with schema validation
GET     /structured/schemas  List built-in schemas
POST    /vote                Majority voting (best-of-N)
POST    /benchmark           Quality comparison test
GET     /templates           List chain templates
GET     /cache               Cache statistics
DELETE  /cache               Clear cache

Cost Comparison

Model          Cost per 1M tokens            Relative
Claude Opus 4  ~$15 input / $75 output       1x
GPT-4o         ~$2.50 input / $10 output     ~7x cheaper
Gemini Flash   ~$0.075 input / $0.30 output  200x cheaper

Cache hits are essentially free (~3-5ms, no API call).
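The per-request cost behind the table can be reproduced directly from the listed per-1M-token prices. Note the headline 200x matches the input-price ratio ($15 / $0.075); the output-price ratio is 250x, so the blended figure depends on the input/output token mix:

```javascript
// Reproduce per-request cost from the listed per-1M-token prices.
// Prices are from the table; the token mix in any example is assumed.
const PRICES = { // USD per 1M tokens
  'Claude Opus 4': { input: 15, output: 75 },
  'GPT-4o':        { input: 2.5, output: 10 },
  'Gemini Flash':  { input: 0.075, output: 0.30 },
};

function requestCost(model, inputTokens, outputTokens) {
  const p = PRICES[model];
  return (inputTokens * p.input + outputTokens * p.output) / 1e6;
}
```

With a 1,000-token-in / 500-token-out request, Opus works out to $0.0525 versus $0.000225 for Flash, roughly 233x.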

Files

60 total
