Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
Parallel Agents
v3.2.0 · Spawns real AI-powered OpenClaw sub-sessions to run multiple specialized agents concurrently for content, dev, QA, docs, and autonomous workflows.
⭐ 0 · 1.5k · 8 current · 10 all-time
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Suspicious · medium confidence
Purpose & Capability
The files (SKILL.md, README, helpers.py, ai_orchestrator.py, examples) and runtime instructions match the name 'Parallel Agents': the code and docs implement an orchestrator that calls OpenClaw's sessions_spawn to create real sub-sessions. No unrelated environment variables, binaries, or installs are requested. The missing short description in the metadata is a minor documentation gap, not a sign of incoherence.
Instruction Scope
The SKILL.md and usage docs instruct the agent to call sessions_spawn with arbitrary 'task' strings and show examples that read local files (e.g., open('app.py').read()) and then send that content to spawned agents for review. The docs explicitly state spawned agents 'are able to use all the same tools as the host.' That means the skill's instructions can cause local files, code, or other runtime state to be transmitted to separate AI sessions (and thus to whatever model/service backs those sessions). The SKILL.md also contains prompt-like material and the repository triggered a 'system-prompt-override' scanner finding — the docs themselves include large system prompts and templates which could be abused or manipulated.
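The data flow described above can be sketched in a few lines. The real OpenClaw sessions_spawn call signature is an assumption here, so it is stubbed; the point is that whatever the host embeds in the task string reaches the child session (and the model behind it) verbatim:

```python
# Hypothetical sketch of the pattern the skill's docs encourage.
# sessions_spawn is a stub standing in for OpenClaw's real API,
# whose signature is an assumption; it records what a child
# session would receive instead of creating a real sub-session.

captured_tasks = []

def sessions_spawn(task: str) -> str:
    captured_tasks.append(task)
    return "session-001"

# The documented example pattern: read a local file and embed its
# full contents in the child session's task prompt. The literal
# string below stands in for open('app.py').read().
source = "SECRET_KEY = 'do-not-leak'\n"
session_id = sessions_spawn(f"Review this code for bugs:\n{source}")

# The child session now holds the file contents verbatim; this is
# the exfiltration surface the scan is flagging.
assert "do-not-leak" in captured_tasks[0]
```

Nothing in the sketch is malicious by itself; the risk is that the host session decides what goes into `task`, and the docs show it filling that slot with local file contents.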
Install Mechanism
No install spec and no external downloads; this is instruction + code packaged with the skill. Because there is no network/install step defined here, nothing arbitrary is pulled during install. The attacker surface comes from runtime behavior, not install-time downloads.
Credentials
The skill requests no env vars, which is reasonable, but it explicitly encourages patterns that read local files and spawn child sessions that inherit host tools. If the host agent has access to credentials, network tools, or sensitive files, child sessions may be able to observe or use that data. So although zero environment variables are required, the documented behavior effectively requests access to the host runtime and its files, a high-impact capability that must be proportional to the user's safety posture.
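One way to blunt the inheritance problem is to strip secret-bearing variables from whatever context is handed to a child session before spawning it. A minimal sketch; the filtering heuristic and the `redact_env` helper are illustrative assumptions, not part of the skill:

```python
import re

# Names matching these substrings are treated as secret-bearing.
# The heuristic is deliberately simple and not exhaustive.
SECRET_PATTERN = re.compile(r"KEY|TOKEN|SECRET|PASSWORD|CREDENTIAL", re.I)

def redact_env(env: dict) -> dict:
    """Return a copy of env with secret-looking entries removed,
    suitable for passing to a spawned sub-session."""
    return {k: v for k, v in env.items() if not SECRET_PATTERN.search(k)}

host_env = {
    "PATH": "/usr/bin",
    "AWS_SECRET_ACCESS_KEY": "abc123",
    "OPENCLAW_API_TOKEN": "t-999",
}
child_env = redact_env(host_env)
# Only non-secret entries survive: {"PATH": "/usr/bin"}
```

Name-based filtering is a floor, not a ceiling: secrets living in files rather than the environment still need the sandboxing measures discussed below.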
Persistence & Privilege
The skill does not request 'always: true' and does not persistently modify other skills. However, because spawned child sessions are described as having the same tool access as the host, autonomy (allowed by default) increases blast radius: the orchestrator enables creating autonomous sub-sessions that can be invoked during agent runs. This is expected for an orchestrator but worth flagging as a privilege-expanding pattern when used in sessions that have access to secrets or networked tools.
Scan Findings in Context
[system-prompt-override] unexpected: The skill includes large, explicit system prompts and instructions intended for spawned agents; the scanner flagged a 'system-prompt-override' pattern. Prompts/templates are expected in an orchestrator, but this finding signals the docs contain content that could be used to override system-level instructions or to manipulate model behavior. Review prompt usage carefully.
What to consider before installing
This skill implements a legitimate-sounding parallel-agent orchestrator; the code and docs align with that purpose. However, before installing or running it, consider the following:
1) Spawned agents are 'real' sub-sessions that the skill says will have 'all the same tools as the host'. If your host session has access to secrets, file I/O, or networked services, child agents may be able to read and transmit that data.
2) The docs and examples explicitly show reading local files (e.g., open('app.py').read()) and passing their contents to spawned agents; avoid doing that in sessions with sensitive data.
3) A prompt-injection pattern was detected in SKILL.md; review all system prompts and templates in ai_orchestrator.py and helpers.py to ensure they don't instruct child agents to exfiltrate, override safeguards, or run privileged actions.
4) Mitigations: run this skill only in agent sessions that do not hold credentials or sensitive files; sandbox the host so spawned agents cannot access secrets; remove or sanitize example code that reads local files; require admin approval before allowing sessions_spawn in environments with privileged access; and audit, and possibly restrict, which tools child sessions can use.
If the vendor can provide documentation guaranteeing that spawned sessions do not inherit secret-bearing credentials, or can be configured to run with minimal privileges, that would lower the risk and increase confidence in this assessment.
Like a lobster shell, security has layers: review code before you run it.
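The "restrict which tools child sessions can use" mitigation can be sketched as an allowlist wrapper around the spawn call. Both function names and the allowlist contents are hypothetical; the real OpenClaw API is again stubbed:

```python
# Tools a child session is permitted to request. Deliberately
# excludes file I/O and shell tools; the names are illustrative.
ALLOWED_CHILD_TOOLS = {"read_docs", "web_search"}

def spawn_with_allowlist(spawn_fn, task: str, tools: set) -> str:
    """Refuse to spawn a child session that requests tools outside
    the approved set; otherwise delegate to the real spawn call."""
    disallowed = tools - ALLOWED_CHILD_TOOLS
    if disallowed:
        raise PermissionError(f"child session requested: {sorted(disallowed)}")
    return spawn_fn(task)

# Stub standing in for the real sessions_spawn.
fake_spawn = lambda task: "session-042"

# Permitted tool set: the spawn goes through.
print(spawn_with_allowlist(fake_spawn, "summarize the docs", {"read_docs"}))
```

Enforcing the check in a wrapper only helps if the orchestrator cannot call sessions_spawn directly, which is why the admin-approval and sandboxing mitigations above still matter.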
latest · vk976kd5wfg5mj0xjxy8585r3yn80s113
