v1.3.7

Swarm

Review: ClawScan verdict for this skill. Analyzed May 1, 2026, 4:54 AM.

Analysis

Swarm appears to be a real parallel-LLM tool, but users should review it carefully: it runs a background local API, handles API keys, and includes an under-declared external runtime and Supabase service-key behavior.

Guidance: Install only if you intend to run a local LLM-worker daemon. Pin or review the GitHub source before running npm install or npm run setup, use limited-spend provider keys, avoid running Supabase-related benchmark code with production credentials, restrict access to localhost:9999, and clear the cache or stop the daemon when you are done.

Findings (7)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Abnormal behavior control

Checks for instructions or behavior that redirect the agent, misuse tools, execute unexpected code, cascade across systems, exploit user trust, or continue outside the intended task.

Agentic Supply Chain Vulnerabilities
Severity: Medium · Confidence: High · Status: Concern
README.md
git clone https://github.com/Chair4ce/node-scaling.git
cd node-scaling
npm install
npm run setup

The documented install path pulls and runs an external npm project even though the registry says there is no install spec; the runtime is not pinned to a reviewed version in the skill metadata.

User impact: A user may run code and dependencies from the repository state they clone, not necessarily the reviewed skill artifact, while that runtime handles API keys and starts a daemon.
Recommendation: Provide a pinned install spec or tag/commit, declare dependencies and credentials in metadata, and review the cloned repository before running npm install or setup.
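One way to act on this recommendation is to pin the clone to a reviewed ref before installing anything. A minimal sketch (the pin_clone helper and the example ref are ours, not part of the skill; substitute a tag or commit you have actually reviewed):

```shell
# pin_clone REPO REF DEST: clone REPO into DEST and hard-fail unless REF
# (a reviewed tag or commit) can be checked out. Nothing is installed here;
# review package.json and any lifecycle scripts in DEST before npm install.
pin_clone() {
  repo="$1"; ref="$2"; dest="$3"
  git clone --quiet "$repo" "$dest" || return 1
  git -C "$dest" checkout --quiet "$ref" || {
    echo "pin_clone: ref '$ref' not found in $repo" >&2
    return 1
  }
}

# Example (the ref is a placeholder for one you have reviewed):
# pin_clone https://github.com/Chair4ce/node-scaling.git v1.3.7 node-scaling
```

Running `npm install --ignore-scripts` on the first pass additionally skips install-time scripts until you have read them.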
Rogue Agents
Severity: Low · Confidence: High · Status: Note
SKILL.md
swarm start              # Start daemon (background)
swarm stop               # Stop daemon

The skill intentionally starts a long-running background daemon. This is disclosed and central to pre-warmed worker performance, but it persists until stopped.

User impact: The worker service may remain available across tasks and can continue consuming local resources or provider quota when invoked.
Recommendation: Start the daemon only when needed, monitor status/logs, and stop it when finished.
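One way to keep the daemon scoped to a single task is a wrapper that always stops it afterwards. A sketch assuming the swarm start/swarm stop commands shown above; the wrapper itself is generic:

```shell
# with_daemon START STOP CMD...: run START, then CMD..., then always run STOP,
# returning CMD's exit status so a failing command is not masked by the stop.
with_daemon() {
  start="$1"; stop="$2"; shift 2
  $start || return 1
  "$@"; status=$?
  $stop
  return $status
}

# Example: with_daemon "swarm start" "swarm stop" swarm status
```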
Unexpected Code Execution
Severity: Low · Confidence: High · Status: Note
bin/setup.js
process.env[provider.envVar] = apiKey; ... runDiagnostics({ runTests: true, skipE2e: false });

The setup wizard runs diagnostics/tests after collecting the API key. This is disclosed as verification, but it can execute local test code and make provider calls.

User impact: Initial setup may consume provider quota and run included diagnostics/test scripts.
Recommendation: Review setup output, use a limited-spend key, and provide a setup option that skips e2e/API tests unless explicitly requested.
Permission boundary

Checks whether tool use, credentials, dependencies, identity, account access, or inter-agent boundaries are broader than the stated purpose.

Identity and Privilege Abuse
Severity: Low · Confidence: High · Status: Note
bin/setup.js
apiKey = await ask('  API Key: '); ... fs.writeFileSync(keyPath, config.apiKey, { mode: 0o600 });

The setup flow collects a provider API key and stores it locally. This is expected for the skill's LLM-provider purpose and uses restrictive file permissions, but the stored key carries sensitive authority over the provider account.

User impact: The skill can spend quota or money on the configured LLM provider account and stores the key under the user's config directory.
Recommendation: Use a dedicated, limited-spend API key, confirm the configured provider and cost limits, and remove the key file if uninstalling the skill.
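A quick way to audit the stored key is to confirm the file kept the 0600 mode that setup writes, and to delete it on uninstall. The check below uses GNU stat (on macOS, `stat -f '%Lp'` instead), and the key path is an assumption; use the location reported by the setup wizard:

```shell
# check_key_perms FILE: warn unless FILE is owner-only (mode 600).
check_key_perms() {
  perms=$(stat -c '%a' "$1" 2>/dev/null)
  if [ "$perms" != "600" ]; then
    echo "warning: $1 has mode ${perms:-missing}, expected 600" >&2
    return 1
  fi
}

# Example (assumed path; substitute the one printed by setup):
# check_key_perms "$HOME/.config/swarm/api-key"
# When uninstalling, remove the stored key:
# rm -f "$HOME/.config/swarm/api-key"
```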
Identity and Privilege Abuse
Severity: Medium · Confidence: High · Status: Concern
benchmark-deep.js
if (process.env.SUPABASE_URL && process.env.SUPABASE_SERVICE_KEY) { ... await supabase.from('swarm_blackboard').delete().like('task_id', 'bench-%'); }

A benchmark path uses a Supabase service key and performs a delete operation. That credential and database mutation are not part of the stated primary provider setup.

User impact: If a user runs the deep benchmark with Supabase service credentials present, the script can mutate data in the configured Supabase project.
Recommendation: Remove or isolate this benchmark path, avoid service-role credentials, require an explicit test project, and document any Supabase usage separately.
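Until the benchmark path is isolated, a guard like the following can refuse to run while service credentials are present. SWARM_BENCH_ALLOW_SUPABASE is a hypothetical opt-in variable we introduce here, not part of the skill:

```shell
# bench_guard: refuse to proceed when a Supabase service key is set, unless the
# user explicitly opts in (intended only for a disposable test project).
bench_guard() {
  if [ -n "$SUPABASE_SERVICE_KEY" ] && [ "$SWARM_BENCH_ALLOW_SUPABASE" != "1" ]; then
    echo "bench_guard: Supabase service key detected;" \
         "set SWARM_BENCH_ALLOW_SUPABASE=1 only for a dedicated test project" >&2
    return 1
  fi
}

# Example: bench_guard && node benchmark-deep.js
```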
Sensitive data protection

Checks for exposed credentials, poisoned memory or context, unclear communication boundaries, or sensitive data that could leave the user's control.

Insecure Inter-Agent Communication
Severity: Medium · Confidence: Medium · Status: Concern
README.md
The daemon exposes a local HTTP API on port 9999: ... curl -X POST http://localhost:9999/parallel

The documented interface accepts prompt/data submissions over localhost HTTP, and the docs do not describe authentication, caller identity checks, or per-caller boundaries.

User impact: Other local processes or agents could potentially submit data to the daemon, trigger provider calls, or incur API cost if they can reach the local port.
Recommendation: Bind only to localhost, add an access token or equivalent local authorization, document caller boundaries, and avoid sending sensitive data through the daemon unless intended.
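The documented API has no auth, so the following is a suggested pattern rather than an existing feature: a shared secret the daemon checks on every request, which callers send in a header. SWARM_TOKEN is an assumed environment variable:

```shell
# check_token PRESENTED: succeed only when a non-empty expected token is
# configured and the presented value matches it exactly.
check_token() {
  [ -n "$SWARM_TOKEN" ] && [ "$1" = "$SWARM_TOKEN" ]
}

# A caller would then include the token on each request, e.g.:
#   curl -X POST http://localhost:9999/parallel \
#        -H "Authorization: Bearer $SWARM_TOKEN" -d '{"prompts": ["..."]}'
```

Rejecting requests when no token is configured (rather than allowing them) keeps the check fail-closed.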
Memory and Context Poisoning
Severity: Low · Confidence: High · Status: Note
SKILL.md
- 500 entries max, 1 hour TTL
- Persists to disk across daemon restarts

The prompt cache stores LLM responses locally for reuse across daemon restarts. The documented limits reduce the risk, but prompts/results may include sensitive task context.

User impact: Recent prompt outputs may remain on disk and be reused within the cache window.
Recommendation: Do not submit sensitive prompts unless appropriate, and use the documented cache-clear endpoint when needed.