Parallel Agents
Warn
Audited by ClawScan on May 10, 2026.
Overview
This skill is transparent about spawning parallel AI sub-agents, but those sub-agents can inherit the host agent’s full tools and access without clear permission, cost, or action boundaries.
Only install or use this skill if you intentionally want to delegate work to multiple real AI sub-agents. Keep tasks narrow, avoid secrets or sensitive files, set strict timeouts/retry/model limits, and make sure your OpenClaw environment restricts what spawned sessions can do.
Findings (5)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
A spawned sub-agent may be able to use the same local, account, or workspace permissions as the main agent, increasing the chance of unintended actions or data access.
The skill delegates the host agent’s full tool authority to spawned sub-agents, but the artifacts do not define which tools, credentials, files, or account actions are allowed.
Able to use all the same tools as the host
Use this only in environments where sub-agent tool access is restricted, and require explicit approval for file changes, account actions, network calls, or other sensitive operations.
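The approval requirement above can be sketched as a wrapper around each tool handed to a spawned session. All names here (`write_file`, `account_action`, etc.) are hypothetical stand-ins, not tools defined by the skill's artifacts:

```python
# Hypothetical sketch: gate sensitive tool calls behind an explicit approval
# callback so a spawned session cannot silently perform high-impact actions.
# Tool names and the SENSITIVE set are illustrative assumptions.

SENSITIVE = {"write_file", "account_action", "http_request"}

def make_gated_tool(name, fn, approve):
    """Wrap `fn` so sensitive calls require `approve(...)` to return True."""
    def gated(*args, **kwargs):
        if name in SENSITIVE and not approve(name, args, kwargs):
            raise PermissionError(f"call to {name!r} denied: no explicit approval")
        return fn(*args, **kwargs)
    return gated

def build_subagent_tools(host_tools, approve):
    """Build a sub-agent toolset containing only gated wrappers."""
    return {name: make_gated_tool(name, fn, approve) for name, fn in host_tools.items()}
```

With `approve` wired to a human prompt (or denied by default), a sub-agent can still read, but any sensitive call surfaces for review first.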
Multiple agents could independently use powerful tools in parallel, which can amplify mistakes such as editing files, calling APIs, or taking actions the user never separately approved.
The tool exposure is broad and inherited by multiple spawned agents; the artifacts do not show scoping, per-tool allowlists, or approval gates for high-impact tool use.
Each agent has same tools/access as host
Limit sub-agents to read-only or task-specific tools where possible, and avoid running broad autonomous tasks unless you have reviewed the exact prompt and permitted tools.
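A per-task allowlist, rather than inheriting the host's full toolset, could look like the following sketch. The tool names are assumptions (only `sessions_history` appears in the skill's evidence):

```python
# Hypothetical sketch: expose only a task-specific allowlist of tools to each
# spawned agent, defaulting to read-only operations.

READ_ONLY_TOOLS = {"read_file", "search", "sessions_history"}

def scoped_tools(host_tools, allowlist=READ_ONLY_TOOLS):
    """Return only the tools a sub-agent is permitted to see."""
    missing = allowlist - host_tools.keys()
    if missing:
        raise KeyError(f"allowlisted tools not available: {sorted(missing)}")
    return {name: host_tools[name] for name in allowlist}
```

Anything not on the allowlist (file writes, account actions) is simply absent from the sub-agent's world rather than merely discouraged.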
Sensitive prompts, file contents, or account data included in a task may be copied into sub-agent sessions and later reused or trusted by the main agent without clear safeguards.
The skill coordinates data between the host and spawned sessions through session history, but does not define data boundaries, sensitive-content handling, or origin trust rules for sub-agent outputs.
from tools import sessions_list, sessions_history
...
output = final_message['content']
Do not send secrets or private data to sub-agents unless the session storage and provider handling are acceptable; verify sub-agent outputs before using them for decisions or actions.
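One way to apply both halves of that advice is to redact obvious secrets before a prompt reaches a sub-agent, and to tag everything coming back as untrusted. The patterns below are illustrative and deliberately incomplete, not a real secret scanner:

```python
import re

# Hypothetical sketch: strip obvious secrets from outbound task text and mark
# inbound sub-agent output as untrusted until explicitly verified.

SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),        # API-key-like tokens
    re.compile(r"(?i)password\s*[:=]\s*\S+"),  # inline passwords
]

def redact(text):
    """Replace matches of known secret patterns before sending to a sub-agent."""
    for pat in SECRET_PATTERNS:
        text = pat.sub("[REDACTED]", text)
    return text

def untrusted(output):
    """Tag sub-agent output so downstream code must opt in before acting on it."""
    return {"origin": "sub-agent", "trusted": False, "content": output}
```

The `trusted: False` flag forces the main agent's code path to make verification an explicit step rather than the default.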
A bad prompt or failing task could be retried and escalated across more capable and expensive models, potentially multiplying cost or repeated tool actions.
The helper automatically escalates failed work across models and the documentation encourages parallel spawning, but the artifacts do not define cost budgets, concurrency limits for helper calls, or containment for repeated failed actions.
MODEL_HIERARCHY = ["anthropic/claude-haiku-4-5", "kimi-coding/k2p5", "anthropic/claude-opus-4-5"]
Set explicit maximum agent counts, retry limits, timeouts, and model/cost budgets before using the helper functions.
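A bounded version of the escalation over the hierarchy shown above might look like this sketch. `run_task` is a stand-in for the skill's helper; the attempt, cost, and timeout limits are the caller's assumptions, since the artifacts define none:

```python
# Hypothetical sketch: walk the model hierarchy with hard caps on retries and
# spend, instead of escalating failed work without limit.

MODEL_HIERARCHY = ["anthropic/claude-haiku-4-5", "kimi-coding/k2p5", "anthropic/claude-opus-4-5"]

def run_with_budget(run_task, task, max_attempts=2, max_cost_usd=1.00, timeout_s=120):
    """Try each model at most `max_attempts` times, stopping at the cost cap.

    `run_task(task, model=..., timeout=...)` must return (result_or_None, cost).
    """
    spent = 0.0
    for model in MODEL_HIERARCHY:
        for _attempt in range(max_attempts):
            if spent >= max_cost_usd:
                raise RuntimeError(f"cost budget exhausted after ${spent:.2f}")
            result, cost = run_task(task, model=model, timeout=timeout_s)
            spent += cost
            if result is not None:
                return result, spent
    raise RuntimeError(f"all models failed within budget (${spent:.2f} spent)")
```

Failures then stop at a known worst case (models × attempts × per-call cost) instead of compounding across parallel spawns.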
Users have little insight into who maintains the code or where updates come from.
The skill includes local Python helper code but has no source or homepage listed, which limits provenance review even though no suspicious static findings were reported.
Source: unknown; Homepage: none
Review the included Python files before use and prefer installing from a known, versioned source when available.
