Parallel Agents

Verdict: Warn. Audited by ClawScan on May 10, 2026.

Overview

This skill is transparent about spawning parallel AI sub-agents, but those sub-agents can inherit the host agent’s full tools and access without clear permission, cost, or action boundaries.

Only install or use this skill if you intentionally want to delegate work to multiple real AI sub-agents. Keep tasks narrow, avoid secrets or sensitive files, set strict timeout, retry, and model limits, and make sure your OpenClaw environment restricts what spawned sessions can do.

Findings (5)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Finding 1: Sub-agents inherit the host’s full tool authority

What this means

A spawned sub-agent may be able to use the same local, account, or workspace permissions as the main agent, increasing the chance of unintended actions or data access.

Why it was flagged

The skill delegates the host agent’s full tool authority to spawned sub-agents, but the artifacts do not define which tools, credentials, files, or account actions are allowed.

Skill content
Able to use all the same tools as the host
Recommendation

Use this only in environments where sub-agent tool access is restricted, and require explicit approval for file changes, account actions, network calls, or other sensitive operations.
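One way to implement the approval requirement above is to route every sub-agent tool call through a single gate. This is a minimal sketch under stated assumptions: the tool names and the idea of a central call chokepoint are illustrative, not something the skill itself provides.

```python
# Minimal approval gate for sub-agent tool calls. Tool names and the
# routing convention are assumptions for illustration.

SENSITIVE_TOOLS = {"write_file", "exec_shell", "http_request", "account_update"}

def gated_tool_call(tool_name, tool_fn, *args, approve=lambda name: False, **kwargs):
    """Run a tool on behalf of a sub-agent; sensitive tools need approval.

    `approve` is a callback (e.g. a human confirmation prompt); it
    defaults to deny, so high-impact actions fail closed.
    """
    if tool_name in SENSITIVE_TOOLS and not approve(tool_name):
        raise PermissionError(f"{tool_name} denied: no explicit approval")
    return tool_fn(*args, **kwargs)
```

Defaulting `approve` to deny means a forgotten wiring step blocks sensitive actions rather than silently permitting them.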

Finding 2: Broad tool access is inherited by every parallel agent

What this means

Multiple agents could independently use powerful tools in parallel, which can amplify mistakes: file edits, API calls, or other actions the user did not separately approve.

Why it was flagged

The tool exposure is broad and inherited by multiple spawned agents; the artifacts do not show scoping, per-tool allowlists, or approval gates for high-impact tool use.

Skill content
Each agent has same tools/access as host
Recommendation

Limit sub-agents to read-only or task-specific tools where possible, and avoid running broad autonomous tasks unless you have reviewed the exact prompt and permitted tools.
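This recommendation can be sketched as building each spawned agent a restricted tool set instead of handing over everything. The sketch assumes the host exposes its tools as a name-to-function mapping; the tool names below are illustrative.

```python
# Sketch: give each sub-agent an explicit allowlist of tools rather than
# the host's full set. The mapping shape and names are assumptions.

READ_ONLY_TOOLS = {"read_file", "list_dir", "search"}

def scoped_tools(host_tools: dict, allowed: set) -> dict:
    """Return only the tools a sub-agent is permitted to use.

    Fails loudly if the allowlist names a tool the host does not have,
    so typos cannot silently widen or narrow the grant.
    """
    missing = allowed - host_tools.keys()
    if missing:
        raise KeyError(f"unknown tools requested: {sorted(missing)}")
    return {name: fn for name, fn in host_tools.items() if name in allowed}
```

A research-style sub-agent would then be spawned with, say, `scoped_tools(host_tools, READ_ONLY_TOOLS)` and simply never sees write or network tools.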

Finding 3: No data boundaries between host and sub-agent sessions

What this means

Sensitive prompts, file contents, or account data included in a task may be copied into sub-agent sessions and later reused or trusted by the main agent without clear safeguards.

Why it was flagged

The skill coordinates data between the host and spawned sessions through session history, but does not define data boundaries, sensitive-content handling, or origin trust rules for sub-agent outputs.

Skill content
from tools import sessions_list, sessions_history ... output = final_message['content']
Recommendation

Do not send secrets or private data to sub-agents unless the session storage and provider handling are acceptable; verify sub-agent outputs before using them for decisions or actions.
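Both halves of this advice can be sketched simply: strip likely secrets from a task before it leaves the host, and sanity-check sub-agent output before trusting it. The regex patterns and checks below are illustrative, not exhaustive, and are not part of the skill.

```python
import re

# Sketch: redact likely secrets outbound, validate sub-agent output inbound.
# Patterns are illustrative assumptions, not a complete secret scanner.

SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|token|password)\s*[=:]\s*\S+"),
    re.compile(r"sk-[A-Za-z0-9]{20,}"),  # a common API-key shape
]

def redact(prompt: str) -> str:
    """Replace likely secrets in a task prompt before sending it out."""
    for pat in SECRET_PATTERNS:
        prompt = pat.sub("[REDACTED]", prompt)
    return prompt

def check_output(text: str, max_len: int = 10_000) -> str:
    """Reject sub-agent output that is empty or suspiciously large."""
    if not text or len(text) > max_len:
        raise ValueError("sub-agent output failed basic checks")
    return text
```

Redaction reduces what a spawned session can leak; the inbound check is a floor, not a substitute for reviewing outputs that will drive decisions or actions.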

Finding 4: Unbounded retry and model escalation

What this means

A bad prompt or failing task could be retried and escalated across increasingly capable and expensive models, potentially multiplying cost and repeating tool actions.

Why it was flagged

The helper automatically escalates failed work across models and the documentation encourages parallel spawning, but the artifacts do not define cost budgets, concurrency limits for helper calls, or containment for repeated failed actions.

Skill content
MODEL_HIERARCHY = ["anthropic/claude-haiku-4-5", "kimi-coding/k2p5", "anthropic/claude-opus-4-5"]
Recommendation

Set explicit maximum agent counts, retry limits, timeouts, and model/cost budgets before using the helper functions.
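The MODEL_HIERARCHY above could be wrapped with hard limits along these lines. The `run_on_model` callable, the cost table, and the budget values are assumptions for illustration; only the model list comes from the skill.

```python
import time

# Sketch: escalate a failed task up the model hierarchy, but stop at
# explicit retry, cost, and wall-clock limits. Budgets are assumed values.

MODEL_HIERARCHY = ["anthropic/claude-haiku-4-5", "kimi-coding/k2p5", "anthropic/claude-opus-4-5"]

MODEL_COST = {  # assumed relative cost units per attempt
    "anthropic/claude-haiku-4-5": 1,
    "kimi-coding/k2p5": 3,
    "anthropic/claude-opus-4-5": 15,
}

def run_with_limits(task, run_on_model, max_retries_per_model=1,
                    cost_budget=20, timeout_s=120):
    """Try each model in order; give up at the first exhausted budget."""
    spent = 0
    deadline = time.monotonic() + timeout_s
    for model in MODEL_HIERARCHY:
        for _ in range(max_retries_per_model):
            if time.monotonic() > deadline:
                raise TimeoutError("task exceeded wall-clock budget")
            if spent + MODEL_COST[model] > cost_budget:
                raise RuntimeError("task exceeded cost budget")
            spent += MODEL_COST[model]
            try:
                return run_on_model(model, task)
            except Exception:
                continue  # escalate to the next attempt or model
    raise RuntimeError("task failed on all models within limits")
```

The key property is that every failure path terminates at a budget check, so a persistently failing task cannot loop indefinitely on the most expensive model.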

Finding 5: Unknown source and provenance

What this means

Users have less context about who maintains the code and where updates come from.

Why it was flagged

The skill includes local Python helper code but has no source or homepage listed, which limits provenance review even though no suspicious static findings were reported.

Skill content
Source: unknown; Homepage: none
Recommendation

Review the included Python files before use and prefer installing from a known, versioned source when available.
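One lightweight aid to that review is to fingerprint the included Python files before first use, so a later, silently changed copy can be detected. The directory layout is an assumption; this does not establish provenance, it only pins what you actually reviewed.

```python
import hashlib
from pathlib import Path

# Sketch: hash the skill's .py files in a stable order so the reviewed
# version can be recognized later. The skill directory path is assumed.

def skill_fingerprint(skill_dir: str) -> str:
    """Return a SHA-256 hex digest over all .py files under skill_dir."""
    digest = hashlib.sha256()
    for path in sorted(Path(skill_dir).rglob("*.py")):
        digest.update(path.name.encode())   # bind names into the hash
        digest.update(path.read_bytes())
    return digest.hexdigest()
```

Recording the digest alongside your review notes makes "is this still the code I audited?" a one-line check.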