Swarm
Analysis
Swarm appears to be a genuine parallel-LLM tool, but users should review it carefully: it runs a background local API, handles provider API keys, and includes an external runtime dependency and service-key behavior that are under-declared in its metadata.
Findings (7)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Checks for instructions or behavior that redirect the agent, misuse tools, execute unexpected code, cascade across systems, exploit user trust, or continue outside the intended task.
git clone https://github.com/Chair4ce/node-scaling.git
cd node-scaling
npm install
npm run setup
The documented install path pulls and runs an external npm project even though the registry says there is no install spec; the runtime is not pinned to a reviewed version in the skill metadata.
swarm start   # Start daemon (background)
swarm stop    # Stop daemon
The skill intentionally starts a long-running background daemon. This is disclosed and central to pre-warmed worker performance, but it persists until stopped.
process.env[provider.envVar] = apiKey;
...
runDiagnostics({ runTests: true, skipE2e: false });
The setup wizard runs diagnostics/tests after collecting the API key. This is disclosed as verification, but it can execute local test code and make provider calls.
Checks whether tool use, credentials, dependencies, identity, account access, or inter-agent boundaries are broader than the stated purpose.
apiKey = await ask(' API Key: ');
...
fs.writeFileSync(keyPath, config.apiKey, { mode: 0o600 });
The setup flow collects a provider API key and stores it locally. This is expected for the skill's LLM-provider purpose and uses restrictive file permissions, but it is a sensitive grant of authority.
if (process.env.SUPABASE_URL && process.env.SUPABASE_SERVICE_KEY) {
  ...
  await supabase.from('swarm_blackboard').delete().like('task_id', 'bench-%');
}
A benchmark path uses a Supabase service key and performs a delete operation. That credential and database mutation are not part of the stated primary provider setup.
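The two behaviors in this finding can be isolated as small predicates: the cleanup is gated on both Supabase variables being set, and the `.like('task_id', 'bench-%')` filter scopes the delete to bench-prefixed task ids. This is a hedged sketch; the function names are illustrative, not the project's code.

```javascript
// Hedged sketch of the gating condition from the finding above: the
// benchmark cleanup only runs when both Supabase variables are present.
function benchmarkCleanupEnabled(env) {
  return Boolean(env.SUPABASE_URL && env.SUPABASE_SERVICE_KEY);
}

// Approximates which rows a LIKE 'bench-%' filter would match, i.e. the
// intended scope of the delete. Illustrative only.
function matchesBenchFilter(taskId) {
  return taskId.startsWith('bench-');
}

console.log(matchesBenchFilter('bench-42'));  // → true
console.log(matchesBenchFilter('prod-task')); // → false
```

The scoping limits the blast radius of the delete, but the service key itself grants much broader authority than this one cleanup, which is the core of the finding.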
Checks for exposed credentials, poisoned memory or context, unclear communication boundaries, or sensitive data that could leave the user's control.
The daemon exposes a local HTTP API on port 9999:
...
curl -X POST http://localhost:9999/parallel
The documented interface accepts prompt/data submissions over localhost HTTP, and the docs do not describe authentication, caller identity checks, or per-caller boundaries.
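The consequence of the missing boundaries can be sketched by constructing the documented request in code: nothing in the docs calls for a token or caller-identity field, so any local process can build a valid submission. The payload shape (`tasks`) is an illustrative assumption, not a documented schema.

```javascript
// Hedged sketch: builds a request to the documented localhost endpoint.
// No Authorization header or caller identity appears because the docs
// describe none. The { tasks } payload shape is an assumption.
function buildParallelRequest(tasks) {
  return {
    url: 'http://localhost:9999/parallel', // documented endpoint
    options: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' }, // no auth header
      body: JSON.stringify({ tasks }),
    },
  };
}

const req = buildParallelRequest([{ prompt: 'summarize this file' }]);
console.log(req.options.headers.Authorization); // → undefined
```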
- 500 entries max, 1 hour TTL
- Persists to disk across daemon restarts
The prompt cache stores LLM responses locally for reuse across daemon restarts. The documented limits reduce the risk, but prompts/results may include sensitive task context.
