Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

swarma - growth loops

v1.0.0

Agent teams that run growth experiments and build their own playbook. GROWS loop: generate hypothesis, run experiment, observe signal, weigh verdict, stack playbook.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for glitch-rabin/swarma.

Prompt preview: Install & Setup
Install the skill "swarma - growth loops" (glitch-rabin/swarma) from ClawHub.
Skill page: https://clawhub.ai/glitch-rabin/swarma
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install swarma

ClawHub CLI


npx clawhub@latest install swarma
Security Scan
VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The name/description (growth experiment loops, agent teams) match the SKILL.md functionality (teams, cycles, scoring, playbooks). However, the registry metadata lists no required env vars or binaries, while SKILL.md explicitly requires a runtime (Python/pip/terminal) and an OPENROUTER_API_KEY. The absence of declared runtime/credentials in the registry is an inconsistency that should be resolved.
Instruction Scope
SKILL.md contains concrete runtime instructions (CLI commands like swarma cycle, serve, run; read/write strategy.md; import CSV metrics; start a REST/MCP server). These are within the skill's stated purpose, but they instruct the agent to run servers and continuous engines and to read/write experiment files. There is no instruction to access unrelated system files or secrets beyond the OpenRouter key, but the instructions give the agent the capability to open network endpoints and import arbitrary CSVs, which increases risk if left unchecked.
Install Mechanism
No install spec is provided even though SKILL.md lists compatibility (Python 3.11+, pip) and references a GitHub repo. That means an agent following the skill may be expected to install a package at runtime (pip from GitHub or similar). Instruction-only skills that implicitly require installing third-party code increase risk because they can cause arbitrary code to be fetched/executed; the upstream package source and exact install steps are not declared in the registry metadata.
Credentials
SKILL.md declares a single required environment variable (OPENROUTER_API_KEY) for LLM calls, which is proportionate for an LLM-driven experiment runner. However, the registry record lists no required env vars — this inconsistency is concerning and should be clarified. No other credentials or sensitive environment paths are demanded in the SKILL.md.
Persistence & Privilege
The skill does not request always:true and uses the normal autonomous-invocation default. That is not a problem by itself. However, the skill instructs running scheduled engines and starting REST/MCP servers (persistent network-facing processes). If the agent is allowed to invoke this autonomously, those capabilities amplify risk — verify whether you'll allow the skill to run continuously or expose network ports.
What to consider before installing
This skill appears to implement the advertised growth-experiment loop, but there are a few mismatches and practical risks to address before installing:

  • Confirm required credentials: SKILL.md requires OPENROUTER_API_KEY for LLM usage but the registry shows no required env vars. Only provide this key if you trust the upstream code and understand its LLM usage and limits.
  • Confirm install/source: SKILL.md refers to Python/pip and a GitHub repo but the registry has no install spec. Ask the publisher for an explicit install plan (exact pip package name or vetted release URL) and inspect the repository before running pip install.
  • Sandbox runtime installs and servers: Because the skill can start servers (REST/MCP) and a scheduled engine, consider running it in a sandboxed environment (container or VM) and restricting network exposure until you audit the code.
  • Review code/repo: Before giving any API keys or allowing autonomous runs, review the repository (https://github.com/glitch-rabin/swarma) or ask the author for a trustworthy release. Look for unexpected network calls, telemetry, or credential exfiltration code.
  • Least privilege: Provide only the minimum API key scope needed for OpenRouter (or prefer ephemeral/test keys) and avoid supplying high-privilege secrets (cloud credentials, SSH keys, database passwords).

If the publisher can (a) add an explicit install spec, (b) reconcile registry metadata with SKILL.md (declare the OPENROUTER_API_KEY requirement), and (c) provide a vetted release link or package name, that would materially reduce the uncertainty.

Like a lobster shell, security has layers — review code before you run it.

Latest: vk97bgt3875qr3ds155nd6zm7s183tc94
91 downloads · 0 stars · 1 version · Updated 4w ago
v1.0.0 · MIT-0

swarma -- growth experiment loop for agent teams

When to Use This Skill

Use swarma when the user wants to:

  • Run growth experiments (hooks, landing pages, outreach, pricing, activation, retention)
  • Build agent teams that learn and improve through A/B testing, not just execute once
  • Get a validated playbook of what actually works for their specific audience/product
  • Test ideas at scale (50+ experiments/week instead of 2-5)
  • Replace "we tried that, it didn't work" with logged, analyzed, searchable experiment data

Trigger phrases: "test what works", "optimize my funnel", "find the best hooks", "run experiments", "A/B test", "what's working", "build a playbook", "growth experiments", "improve conversion"

Do NOT use when: user wants workflow automation (use n8n/Make), conversation memory (use honcho), or one-shot agent pipelines (use CrewAI/AutoGen). swarma is specifically for experiment loops that improve over time.


Quick Reference

Commands at a Glance

| Command | What it does | When to use |
| --- | --- | --- |
| swarma init | Create instance + starter team | First-time setup |
| swarma cycle <team> | Run one experiment cycle | Testing, manual runs |
| swarma cycle <team> --topic "..." | Run cycle with a specific topic | Ad-hoc experiments |
| swarma team create <name> --from-goal "..." | Generate team from a goal | Starting a new experiment area |
| swarma team show <name> | Inspect a team's config | Reviewing what was generated |
| swarma team list | Show all teams | Overview |
| swarma status | Costs, recent runs, experiments | Health check |
| swarma metric log <team> <agent> <value> | Log external metric | Feeding real-world data |
| swarma metric import <team> <csv> | Bulk import metrics | Batch data ingestion |
| swarma metric show <team> | View logged metrics | Reviewing performance |
| swarma serve --port 8282 | Start REST API | External integrations |
| swarma serve --mcp | Start MCP server | Claude Code / Hermes integration |
| swarma run | Start scheduled engine | Continuous operation |
| swarma expert list | Browse reasoning lenses | Exploring expert frameworks |

Decision: Which Squad Template?

| User wants to improve... | Use this squad | AARRR stage |
| --- | --- | --- |
| Opening lines / hooks | hook-lab | Acquisition |
| Landing page copy | landing-lab | Acquisition |
| SEO rankings | seo-engine | Acquisition |
| Cold outreach response rates | cold-outbound | Acquisition |
| Multi-platform content | channel-mix | Acquisition |
| Signup-to-value onboarding | activation-flow | Activation |
| Pricing and packaging | pricing-lab | Revenue |
| Churn and retention | retention-squad | Retention |
| Viral loops and referrals | referral-engine | Referral |
| Market positioning | competitive-intel | -- |
| Short-form video pipeline | faceless-factory | Acquisition |
| Ad creative testing | ad-creative-lab | Acquisition |
| UGC content simulation | ugc-factory | Acquisition |
| Programmatic SEO | programmatic-seo | Acquisition |
| Newsletter growth | newsletter-engine | Retention |
| Paid + organic loops | acquisition-squad | Acquisition |
| Community-led growth | community-engine | Retention |
| AI commerce optimization | agentic-storefront | Revenue |

Decision: Generate vs Template?

| Situation | Approach |
| --- | --- |
| User has a specific, well-defined goal | swarma team create --from-goal (let AI design the team) |
| Goal matches an existing squad template | Copy template, then customize |
| User wants to experiment broadly | Start with hook-lab (most general) |
| User doesn't know where to start | Ask about their funnel bottleneck, then pick |

The GROWS Loop (Core Concept)

Every experiment cycle follows five steps:

  Generate       Run         Observe       Weigh        Stack
 hypothesis --> experiment --> signal --> verdict --> playbook
     ^                                                  |
     └──────────────────────────────────────────────────┘

| Step | What happens | Where in code |
| --- | --- | --- |
| G -- Generate | Agent reads strategy.md, proposes a hypothesis | core/cycle.py |
| R -- Run | Agent executes with hypothesis active, produces output | flow/executor.py |
| O -- Observe | Separate cheap LLM scores output (1-10, forced decimals) | core/agent.py |
| W -- Weigh | After 5 cycles, compare average vs baseline; >20% = keep/discard | core/experiment.py |
| S -- Stack | Validated patterns written to strategy.md + playbook | core/agent.py |

Key numbers:

  • Verdict threshold: 20% improvement to keep, 20% decline to discard
  • Default min_sample_size: 5 cycles before verdict
  • Scoring: 1-10 scale with forced decimals (7.3, not 7)
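
A minimal sketch of the Weigh step's arithmetic using the numbers above (average the last min_sample_size scores, compare to baseline, apply the 20% threshold). This is illustrative only, not the actual core/experiment.py logic; the function name and signature are hypothetical.

def weigh(scores: list[float], baseline: float,
          min_sample_size: int = 5, threshold: float = 0.20) -> str:
    # No verdict until enough cycles have completed (default 5).
    if len(scores) < min_sample_size or baseline <= 0:
        return "pending"
    avg = sum(scores[-min_sample_size:]) / min_sample_size
    delta = (avg - baseline) / baseline
    if delta > threshold:
        return "keep"      # >20% improvement: stack into the playbook
    if delta < -threshold:
        return "discard"   # >20% decline: log as an anti-pattern
    return "pending"       # inconclusive: keep running cycles

# Scores are the 1-10 judge ratings (forced decimals, e.g. 7.3).
# Baseline 6.0, five cycles averaging 7.4 (+23%) -> "keep"
print(weigh([7.1, 7.6, 7.2, 7.8, 7.3], baseline=6.0))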

Setup Guide

Platform: Claude Code / Claude Desktop

pip install swarma
swarma init

Add to .mcp.json:

{
  "mcpServers": {
    "swarma": {
      "command": "swarma",
      "args": ["serve", "--mcp"],
      "env": { "OPENROUTER_API_KEY": "sk-or-..." }
    }
  }
}

Important: OPENROUTER_API_KEY must be in the MCP env block. The instance .env is not inherited by subprocesses.

Platform: Hermes (via terminal)

Hermes has terminal access -- it can run swarma CLI commands directly. No MCP required.

pip install swarma
swarma init

Then tell Hermes: "run swarma cycle hook-lab --topic 'AI agents are overhyped'"

Hermes reads terminal output and acts on results. For structured access, add MCP:

# hermes config.yaml
mcp_servers:
  swarma:
    transport: stdio
    command: swarma
    args: ["serve", "--mcp"]
    env:
      OPENROUTER_API_KEY: "sk-or-..."

Platform: OpenClaw

pip install swarma
swarma init

Configure as MCP tool or use terminal access depending on your OpenClaw setup.

Platform: CLI (standalone)

pip install swarma
swarma init                                        # creates instance + starter team
swarma cycle starter --topic "why do startups fail?"   # run one cycle
swarma status                                      # check costs, runs, experiments

From source

git clone https://github.com/glitch-rabin/swarma.git
cd swarma && pip install -e .
swarma init

Environment setup

After swarma init, add your API key:

echo "OPENROUTER_API_KEY=sk-or-..." >> ~/.swarma/instances/default/.env

Get a key at openrouter.ai/keys.

Optional (for cross-team knowledge):

# Only needed when running 3+ teams
echo "QMD_ENDPOINT=http://localhost:8181/mcp" >> ~/.swarma/instances/default/.env

Onboarding Flow

When a user wants to set up swarma, follow this sequence. The team generator is the fastest path -- don't make users configure agents manually.

Step 1: Understand the goal

Ask:

  • "What do you want to improve?" (conversion, engagement, outreach response rate, SEO rankings, etc.)
  • "Who is your audience?" (B2B SaaS users, crypto community, enterprise buyers, etc.)
  • "What does success look like?" (more signups, higher CTR, better reply rates, etc.)

Step 2: Install

pip install swarma
swarma init --yes

Step 3: Generate the team

This is the key step. Use the team generator instead of picking templates.

swarma team create growth-lab \
  --from-goal "optimize landing page conversion for our B2B SaaS" \
  --context "developer tools company, 500 free users, 2% conversion to paid" \
  --budget 30

The generator:

  1. Designs the team (2-5 agents with specific roles)
  2. Picks models that fit each role
  3. Writes agent instructions and experiment patterns
  4. Creates a first experiment hypothesis ready to run

Review what it generated:

swarma team show growth-lab

Step 4: Run the first cycle

swarma cycle growth-lab

Expected output:

Running cycle: growth-lab
  flow: researcher -> copywriter -> judge
  agents: ['researcher', 'copywriter', 'judge']

                              Cycle: growth-lab
  Agent      Model              Cost       Output Preview
  researcher sonar-pro          $0.000384  **Topic:** 52% of executives...
  copywriter qwen3.5-plus-02-15 $0.000746  [A] We sent 4,382 cold emails...
  judge      mistral-nemo       $0.000416  **Hook Variations:** A: "Did...

  duration: 43.9s | total cost: $0.001546 | agents: 3

Step 5: Run more cycles and review

swarma cycle growth-lab                    # run another cycle
swarma cycle growth-lab --topic "specific angle"  # with a topic
swarma status                              # check progress

After 5 cycles, the experiment engine issues its first verdict. The strategy file evolves automatically.


Day-to-Day Usage

Running experiments

# Single cycle
swarma cycle hook-lab

# With a specific topic
swarma cycle hook-lab --topic "AI agents are commoditizing"

# Continuous (teams with cron schedules run automatically)
swarma run

# Continuous with API server
swarma run --port 8282

Feeding real metrics

LLM self-eval is a starting proxy. For production, feed back real-world signals:

# Log a single metric
swarma metric log hook-lab copywriter 4.2 --metric ctr_pct

# Attach to a specific experiment
swarma metric log hook-lab copywriter 127 --metric impressions --exp 3

# Add a note
swarma metric log hook-lab copywriter 5.1 --metric ctr_pct --note "from linkedin analytics"

# Bulk import from CSV
swarma metric import hook-lab metrics.csv

# View logged metrics
swarma metric show hook-lab

CSV format: agent,value,metric_name,note

copywriter,4.2,ctr_pct,week 1
copywriter,5.1,ctr_pct,week 2
researcher,7.8,relevance_score,
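
If metrics come from a spreadsheet or analytics export, a short script can write the CSV in this agent,value,metric_name,note column order before importing. A minimal Python sketch; the rows are illustrative, and it assumes the importer expects data rows without a header line, as in the sample above.

import csv

# Rows in the documented column order: agent, value, metric_name, note
rows = [
    ("copywriter", 4.2, "ctr_pct", "week 1"),
    ("copywriter", 5.1, "ctr_pct", "week 2"),
    ("researcher", 7.8, "relevance_score", ""),
]

with open("metrics.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)

# Then: swarma metric import hook-lab metrics.csv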

Using squad templates

# Copy a template to your instance
cp -r "$(python -c "import swarma; print(swarma.__path__[0])")/examples/hook-lab" \
  ~/.swarma/instances/default/teams/hook-lab

# Or if you cloned the repo
cp -r examples/hook-lab ~/.swarma/instances/default/teams/hook-lab

# Run it
swarma cycle hook-lab --topic "why most startups fail"

Checking status

swarma status

Shows: all teams, recent runs, costs (today + this month), pending plans, queue stats.


MCP Tools Reference

When connected via MCP, these 16 tools are available:

| Tool | Description | Parameters |
| --- | --- | --- |
| swarma_health | Check if swarma is running | -- |
| swarma_list_teams | List all configured teams | -- |
| swarma_get_team | Get team details (agents, flow, schedule) | team_id |
| swarma_list_agents | List agents in a team | team_id |
| swarma_run_agent | Run a single agent with optional context | team_id, agent_id, context? |
| swarma_run_cycle | Run a full cycle for a team | team_id, topic? |
| swarma_status | Instance status (costs, runs, experiments) | -- |
| swarma_costs | Cost breakdown (today, this month) | -- |
| swarma_list_plans | Show pending experiment plans | team_id? |
| swarma_approve_plan | Approve a pending experiment plan | plan_id |
| swarma_reject_plan | Reject a pending plan | plan_id, reason? |
| swarma_get_outputs | Recent outputs from agents | team_id?, agent_id?, limit? |
| swarma_list_tools | List available agent tools | -- |
| swarma_list_experts | Browse expert reasoning lenses | -- |
| swarma_get_expert | Get expert details by ID | expert_id |
| swarma_generate_team | Generate a new team from a goal | name, goal, context?, budget? |
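
Outside Claude Code, these tools can be driven from any MCP client. Below is a minimal sketch using the official MCP Python SDK (the mcp package) with the stdio server config from the setup guide; the tool names come from the table above, but the result handling is an assumption, since the output shapes aren't documented here.

import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Launch swarma's MCP server as a stdio subprocess.
    server = StdioServerParameters(
        command="swarma",
        args=["serve", "--mcp"],
        env={"OPENROUTER_API_KEY": "sk-or-..."},
    )
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Run one cycle, then pull recent outputs for review.
            await session.call_tool(
                "swarma_run_cycle", {"team_id": "hook-lab", "topic": "test run"})
            outputs = await session.call_tool(
                "swarma_get_outputs", {"team_id": "hook-lab", "limit": 3})
            print(outputs)

asyncio.run(main())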

Common MCP Workflows

"What's been happening?"

  1. swarma_status -- overview
  2. swarma_get_outputs -- recent agent outputs
  3. swarma_list_plans -- pending experiments

"Run an experiment"

  1. swarma_run_cycle with team_id and optional topic
  2. swarma_get_outputs to review results

"Start a new experiment area"

  1. swarma_generate_team with goal and context
  2. swarma_get_team to review what was generated
  3. swarma_run_cycle to kick it off

"What's working?"

  1. swarma_get_outputs for recent results
  2. Read the team's strategy.md for validated patterns

Team Configuration Reference

A team is a folder. No code required.

teams/my-squad/
├── team.yaml          # goal, flow, schedule, budget
├── program.md         # team context and constraints
└── agents/
    ├── researcher.yaml
    ├── writer.yaml
    └── strategy.md    # pre-seeded growth knowledge (evolves automatically)

team.yaml

name: my-squad
goal: find what works.
flow: "researcher -> writer"        # sequential
# flow: "researcher -> [writer, analyst]"  # parallel
schedule: "0 8 * * 1-5"            # optional: weekdays at 8am
budget: 30                          # optional: monthly budget in $

agent.yaml

id: writer
name: Writer
instructions: |
  turn research into a post. max 200 words.
  hook in the first line. practitioner voice.
model: qwen/qwen3.5-plus-02-15     # optional: override default routing
metric:
  name: content_quality
  target: 8.0
experiment_config:
  min_sample_size: 5
  auto_propose: true

strategy.md (evolves automatically)

Starts with seed knowledge, grows with every validated experiment:

### Validated Patterns

**Specificity wins**
- Hooks with specific numbers outperform vague claims by 2-3x on saves
- "47% of startups" > "most startups"

### Anti-patterns (Discarded)
- Generic inspirational openings: -23% vs baseline. Discard.

### Patterns to Test
- [ ] First-person confession vs third-person case study
- [ ] Time-anchored ("In 2024...") vs timeless hooks

Flow DSL

# Sequential: a runs, output passes to b
flow: "researcher -> writer"

# Parallel: a runs, then b and c run concurrently
flow: "researcher -> [writer, analyst]"

# Mixed: sequential then parallel then sequential
flow: "researcher -> [writer, analyst] -> judge"

Cross-Team Knowledge (QMD)

By default, each team learns individually via its own strategy.md. To share knowledge across teams, wire in QMD:

# ~/.swarma/instances/default/config.yaml
knowledge:
  engine: qmd
  qmd_endpoint: http://localhost:8181/mcp

With QMD: team A discovers loss framing beats gain framing, team B sees that pattern in its next cycle. Anti-patterns are shared too.

You don't need QMD until running 3+ teams. Most users start without it.


Troubleshooting

| Problem | Cause | Fix |
| --- | --- | --- |
| "No API key found" | Missing OPENROUTER_API_KEY | Add to ~/.swarma/instances/default/.env |
| MCP subprocess can't find key | Instance .env not inherited | Pass key in MCP config env block |
| "No teams found" | Empty instance | Run swarma init or copy a squad template |
| Experiments not issuing verdicts | Not enough cycles | Need min_sample_size (default 5) completed cycles |
| Strategy file not evolving | No verdict yet | Run more cycles, check swarma status |
| swarma cycle shows $0.000000 cost | Model returned empty | Check API key validity, try swarma cycle starter |
| QMD not connecting | QMD not running | Start with qmd serve before swarma |
| Results.tsv empty | No cycles completed | Run at least one cycle first |

Verification

After setup, verify everything works:

# 1. Run a cycle
swarma cycle starter --topic "test run"
# Expected: table showing agent outputs + costs

# 2. Check status
swarma status
# Expected: teams listed, recent run shown, costs displayed

# 3. Check a real squad (if installed)
swarma team show hook-lab
# Expected: team config with agents, flow, metrics

If all three pass, the GROWS loop is operational.


What swarma Is Not

| swarma is not... | Use this instead | The difference |
| --- | --- | --- |
| memory | honcho | swarma doesn't remember conversations. it runs experiment loops. |
| workflow automation | n8n, Make, Zapier | those connect apps. swarma runs hypotheses and learns from results. |
| a prompt library | agency-agents | swarma teaches agents what works through feedback. templates go in, playbooks come out. |
| agent orchestration | CrewAI, AutoGen, LangGraph | those run pipelines. swarma adds the GROWS loop that makes pipelines improve. |
| a hosted service | -- | self-hosted. your data stays on your machine. |
