ClawSwarm

Pass. Audited by ClawScan on May 10, 2026.

Overview

ClawSwarm appears to do what it says, but using it sends your configured forecasting context to an LLM provider, authenticated with your API key, and may make many paid API requests.

Before installing, make sure you are comfortable sending the configured forecasting context to the selected LLM provider. Use environment variables for API keys, avoid sensitive private data in configs, verify any `base_url`, start with small agent counts or `--dry-run`, and install dependencies in a trusted virtual environment.

Findings (4)

This is an artifact-based, informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Finding 1: Config-driven agent expansion can incur API cost

What this means

A large or unreviewed config could consume API quota or incur provider charges.

Why it was flagged

The config controls how many agents are expanded, and each agent can trigger an LLM API request. This is core to the swarm purpose, but it can create significant cost, quota, or rate-limit impact.

Skill content
```python
count = group.get('count', 1)
...
for i in range(count):
    ...
    result = call_llm(agent, target, api_config)
```
Recommendation

Start with `--dry-run` or a small agent count, review `count`, `provider`, `base_url`, and `delay_ms`, and monitor API usage.
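One way to gauge the blast radius before a real run is to count the requests a config would generate. A minimal sketch, assuming a config shape implied by the snippet above (an `agents` list of groups with an optional `count`, and a `targets` list; these field names are assumptions, not confirmed from the skill):

```python
def estimate_calls(config: dict) -> int:
    """Estimate how many LLM API calls a config would trigger, offline."""
    total = 0
    for group in config.get('agents', []):
        # Mirrors the scanned logic: each group expands to `count` agents
        # (default 1), and each expanded agent makes one request per target.
        count = group.get('count', 1)
        total += count * max(len(config.get('targets', [])), 1)
    return total

example = {
    'agents': [{'role': 'bull analyst', 'count': 3}, {'role': 'bear analyst'}],
    'targets': [{'symbol': 'BTC'}, {'symbol': 'ETH'}],
}
print(estimate_calls(example))  # 8: (3 + 1) agents x 2 targets
```

Running a check like this against a config before the real run makes cost and rate-limit exposure visible up front.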

Finding 2: API key is read from config or environment and sent as a bearer token

What this means

Anyone who can read a config containing a direct API key, or redirect a config to an untrusted endpoint, may misuse provider credentials.

Why it was flagged

The runner reads a configured API key and sends it as a bearer token to the selected LLM endpoint. This is expected for provider access, but the key carries account and billing authority.

Skill content
```python
api_key = os.environ.get(api_config.get('api_key_env', 'GROQ_API_KEY'),
                         api_config.get('api_key', ''))
...
headers['Authorization'] = f'Bearer {api_key}'
```
Recommendation

Prefer environment variables over direct `api_key` config values, keep configs private, use scoped provider keys when possible, and only run trusted configs.
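A defensive key-resolution sketch along these lines (the `api_key_env` / `api_key` names follow the scanned snippet; refusing an inline key is stricter than the skill, which silently falls back to it, and is an assumption about how you might want to harden it):

```python
import os

def resolve_api_key(api_config: dict) -> str:
    # Prefer the environment variable named in the config, mirroring the
    # scanned lookup order.
    env_name = api_config.get('api_key_env', 'GROQ_API_KEY')
    key = os.environ.get(env_name, '')
    if not key and api_config.get('api_key'):
        # A literal key works but lives in a readable file; refuse it here
        # instead of using it.
        raise ValueError(f"set {env_name} instead of putting api_key in the config")
    if not key:
        raise KeyError(f"no API key found in ${env_name}")
    return key

os.environ['GROQ_API_KEY'] = 'sk-example'
print(resolve_api_key({}))  # sk-example
```

Keeping the key out of the config file means sharing or versioning the config does not leak credentials.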

Finding 3: Forecasting context is sent to the configured LLM endpoint

What this means

Private forecasting inputs placed in the config may be transmitted to a hosted LLM provider or any `base_url` chosen in the config.

Why it was flagged

The role prompt, target price, and target context are sent to the configured LLM provider or override URL. This data flow is disclosed and purpose-aligned, but the context may contain private strategy or market data.

Skill content
```python
system_prompt = f"""{agent['role']} ... {target.get('context', '')}"""
...
url = api_config.get('base_url')
...
requests.post(url, json=payload, headers=headers, timeout=30)
```
Recommendation

Avoid putting sensitive or proprietary data in `target.context` unless you trust the configured provider and endpoint.
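Before trusting a config, you can reconstruct the outbound request offline and review exactly what would leave your machine. A sketch, with field names taken from the scanned snippet; the host allowlist and the default Groq URL are assumptions added for illustration, not part of the skill:

```python
from urllib.parse import urlparse

ALLOWED_HOSTS = {'api.groq.com'}  # hosts you have decided to trust

def preview_request(agent: dict, target: dict, api_config: dict) -> dict:
    # Default URL is an assumption for illustration.
    url = api_config.get('base_url', 'https://api.groq.com/openai/v1/chat/completions')
    host = urlparse(url).hostname
    if host not in ALLOWED_HOSTS:
        raise ValueError(f"base_url points at untrusted host: {host}")
    system_prompt = f"{agent['role']}\n{target.get('context', '')}"
    # Everything in this dict leaves your machine on a real run.
    return {'url': url, 'system_prompt': system_prompt}

req = preview_request(
    {'role': 'bear analyst'},
    {'context': 'internal desk view: short above 70k'},
    {},
)
print(req['system_prompt'])
```

Reviewing the assembled prompt this way makes it obvious when private strategy or market data has slipped into `target.context`.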

Finding 4: Unpinned PyPI dependencies

What this means

Different dependency versions or a compromised local Python environment could affect runtime behavior.

Why it was flagged

The README suggests manually installing unpinned PyPI packages. This is not auto-executed and the packages are common, but dependency versions and sources are not locked by the artifacts.

Skill content
```shell
pip install numpy pyyaml requests
```
Recommendation

Use a virtual environment and pin known-good versions if you plan to rely on the skill regularly.
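Pinning can be as simple as a requirements file. The version numbers below are illustrative assumptions, not vetted pins; substitute whichever versions you have tested:

```
# requirements.txt -- versions are illustrative, pin what you have vetted
numpy==1.26.4
pyyaml==6.0.1
requests==2.31.0
```

Then install into an isolated environment with `python3 -m venv .venv && .venv/bin/pip install -r requirements.txt`, so the skill's dependencies cannot drift or clash with the rest of the system.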