Swarm
Review
Audited by ClawScan on May 10, 2026.
Overview
Swarm appears purpose-aligned, but it deserves review because it can start a persistent local LLM-worker daemon, use stored provider API keys for paid external calls, and cache prompt data.
Review before installing. If you use Swarm, run setup yourself, use a dedicated low-spend API key, confirm the daemon stays on localhost, disable or clear cache for sensitive work, and require approval before the agent starts the daemon or launches large batches.
Findings (6)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
An agent could start or use the local service in a way that spends provider credits or sends task data to external models.
The instructions encourage recurring daemon use and direct localhost API calls. Those endpoints can trigger external worker LLM requests, so approval and access controls should be explicit.
```bash
# Check daemon (do this every session)
swarm status

# Start if not running
swarm start

curl -X POST http://localhost:9999/chain/auto
```
Require explicit user approval before starting the daemon or invoking batch/benchmark/chain endpoints; keep the port bound to localhost, avoid exposing it, and configure cost limits.
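One quick way to verify the binding is a listener check; this is a minimal sketch assuming the default port 9999 shown in the endpoint above and a Linux or macOS host with `ss` or `lsof` available.

```bash
# Confirm the daemon port is bound to the loopback interface only.
# Any listening address other than 127.0.0.1 / ::1 means the worker API
# is reachable from the network and should be locked down.
ss -ltn 2>/dev/null | grep ':9999' || lsof -iTCP:9999 -sTCP:LISTEN
```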
The worker service may continue running after the immediate task and may remain available to process future requests.
The skill runs a background daemon, but it is disclosed and includes a stop command.
```bash
swarm start   # Start daemon (background)
swarm stop    # Stop daemon
```
Start it only when needed, check `swarm status`, and stop it after sensitive or cost-sensitive work.
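A simple session pattern is sketched below. It assumes `swarm status` exits non-zero when no daemon is running; the docs disclose the commands but not their exit codes, so verify that behavior first.

```bash
# Start the daemon only for the task at hand, then stop it afterwards.
# Assumption: `swarm status` returns non-zero when the daemon is not running.
if ! swarm status; then
  swarm start
fi

# ... run the batch or chain work here ...

swarm stop   # don't leave the worker service running after cost-sensitive work
```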
The skill can use your LLM provider account and quota for worker requests.
Setup stores provider API keys locally with restrictive file permissions. This is purpose-aligned, but worth noting because the registry metadata declares no primary credential.
```js
const keyPath = path.join(CONFIG_DIR, `${config.provider}-key.txt`);
fs.writeFileSync(keyPath, config.apiKey, { mode: 0o600 });
```
Use dedicated API keys with provider-side spend limits, protect the config directory, and ask the maintainer to declare credentials in metadata.
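If you want to double-check the on-disk protection yourself, something like the following works. The config path is an assumption (the snippet above only shows `CONFIG_DIR`), so substitute whatever directory the setup reports.

```bash
# Hypothetical config location -- adjust to the directory printed by setup.
CONFIG_DIR="$HOME/.swarm"

# Owner-only access to the directory and the stored provider key files.
chmod 700 "$CONFIG_DIR"
chmod 600 "$CONFIG_DIR"/*-key.txt

# Verify: key files should show -rw------- (600) and the dir drwx------ (700).
ls -la "$CONFIG_DIR"
```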
Sensitive prompt outputs may remain on disk temporarily and could be reused if caching is enabled.
Prompt/response cache data can persist locally and be reused across daemon restarts, though the docs disclose TTL, bypass, and clear-cache controls.
```
LRU cache for LLM responses
500 entries max, 1 hour TTL
Persists to disk across daemon restarts
Per-task bypass: set `task.cache = false`
```
Disable caching for sensitive tasks, clear the cache after use, and verify the cache storage location and permissions.
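The per-task bypass can be expressed in the request itself. The JSON shape below is an assumption (the docs only state that setting `task.cache = false` skips the cache), so check the skill's API reference for the exact field names.

```bash
# Sketch: submit a task with caching disabled so the response is not written
# to the on-disk LRU cache. Field names are assumed, not confirmed.
curl -X POST http://localhost:9999/chain/auto \
  -H 'Content-Type: application/json' \
  -d '{"prompt": "sensitive task here", "cache": false}'
```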
Private task content may leave the local machine and be processed by third-party AI providers.
Task prompts and data are intentionally sent to external LLM/search providers; this is core to the skill but should be understood before use.
```
Supported Providers
Google Gemini
Groq
OpenAI
Anthropic
Web search grounding
Google Search
```
Do not send secrets or regulated data unless the selected provider and account settings are approved for that use.
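Before batching real data through the service, a coarse pre-flight scan for obvious credentials can catch mistakes. The patterns and the `tasks.txt` filename below are illustrative only, not part of the skill.

```bash
# Coarse check of a prompt/input file for obvious secrets before submission.
# Extend the pattern list for your environment; this is not exhaustive.
grep -nE 'BEGIN (RSA |EC )?PRIVATE KEY|AKIA[0-9A-Z]{16}|api[_-]?key' tasks.txt \
  && echo "possible secret found -- review before sending"
```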
Installing the runtime can execute dependency install/setup code outside the registry’s declared install mechanism.
The runtime setup relies on manually cloning and installing a Node project from GitHub, while the registry says there is no install spec.
```bash
git clone https://github.com/Chair4ce/node-scaling.git
npm install
npm run setup
```
Review the repository, package.json, and package-lock before running npm install or setup, and prefer pinned releases.
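A cautious install flow might look like the sketch below. The tag name is hypothetical (check the repository's releases page), and `--ignore-scripts` defers lifecycle scripts until you have read them.

```bash
# Clone and inspect before anything executes.
git clone https://github.com/Chair4ce/node-scaling.git
cd node-scaling

# Pin to a reviewed release if one exists (tag name is hypothetical).
git checkout v1.0.0

# Read what will run: the scripts block in package.json, then the lockfile if present.
cat package.json
less package-lock.json

# Install without running lifecycle scripts; run setup only after review.
npm install --ignore-scripts
npm run setup
```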
