Researching In Parallel

v1.4.2

Research any topic thoroughly by running three sub-agents with distinct analytical lenses (breadth, critique, evidence), then giving their outputs to a final...

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The skill is a multi-pass research orchestrator. Its request to read prompt templates, write prompt files, spawn sub-agents, fetch web/PDF content, and save outputs/extracts is appropriate and expected for deep research. The SKILL.md's requirement that the skill be installed inside the agent's workspace is consistent with the need for spawned sub-agents to access local prompt templates and assets.
Instruction Scope
The runtime instructions direct the main agent and sub-agents to read skill-config.json, assemble prompts, check for a user-supplied provided-sources file, copy files into the run workspace if needed, run web_fetch/browser/PDF tools to retrieve full text, and save cleaned source extracts and multiple output files inside the specified workspace directory. These actions are coherent with the research purpose, but they do involve reading and writing files under whatever workspace path is supplied, as well as performing network fetches. Of particular note: the SKILL.md says preparatory work (reading config, checking the workspace, identifying provided sources) may be completed without waiting for explicit confirmation, which means the skill may read files in the workspace during setup, before a spawn is confirmed.
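Because those preparatory reads can happen before any spawn confirmation, one simple mitigation is to point each run at a fresh, empty workspace. A minimal shell sketch; the paths and workflow here are illustrative suggestions, not part of the skill itself:

```shell
# Create a fresh, empty workspace for this run so the skill's
# preparatory reads cannot see unrelated or sensitive files.
workspace="$(mktemp -d /tmp/research-run.XXXXXX)"
echo "Run workspace: $workspace"
# Install the skill directory and any provided-sources file into
# $workspace before starting the run.
```

After the run, the workspace contains only the skill's assets and outputs and can be archived or deleted as a unit.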
Install Mechanism
No install spec or external downloads are present; the skill is primarily instruction- and template-driven with one helper script (assemble_prompts.py). No package downloads, remote installers, or unusual URLs are used. This is low-risk from an install/execution standpoint.
Credentials
The skill declares no required environment variables, credentials, or config paths. The skill-config.json includes model strings (provider/model identifiers) to guide session spawns, but it does not request secrets. Requiring web_fetch/browser/PDF tools and sessions_spawn is appropriate for its purpose and does not imply disproportionate credential access.
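For illustration, the relevant portion of skill-config.json might look like the sketch below. The save_source_extracts key is named elsewhere in this review; the models structure and the specific provider/model identifiers are assumptions for illustration only:

```json
{
  "save_source_extracts": true,
  "models": {
    "breadth": "anthropic/claude-sonnet-4-5",
    "critique": "anthropic/claude-sonnet-4-5",
    "evidence": "anthropic/claude-opus-4-5"
  }
}
```

Setting save_source_extracts to false prevents cleaned source extracts from being written to the workspace.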
Persistence & Privilege
The always flag is false, and the skill does not request elevated runtime privileges or modification of other skills. The one privilege to note is the explicit requirement that the skill be installed inside the agent's workspace so sub-agents can access its assets; installing into a workspace gives the skill and spawned sub-agents read/write access to files under that workspace. This is necessary for functionality, but it should be considered when choosing the install location.
Assessment
This skill appears to do exactly what it says: orchestrate three research sub-agents, fetch full-text sources, and save structured outputs to a workspace. Key things to consider before installing or running it:

- Workspace placement: the SKILL.md requires installing the skill directory inside the agent's workspace so sub-agents can read its templates. That is necessary for the skill to work, but it also means the skill (and the sub-agents it spawns) will have read/write access to files under that workspace. Do not install it into a workspace that contains sensitive files you do not want the skill to read.
- File reads during setup: the skill states it may perform preparatory reads (reading skill-config.json, checking the workspace, looking for provided-sources files) without waiting for an explicit spawn confirmation. If you have sensitive files in the workspace, consider moving them or using a new empty directory for runs.
- Network and scraping: sub-agents will use web_fetch, a browser tool, and a PDF extractor to retrieve full-text sources from the web. This is expected for deep research, but it means the agent will make outbound requests to external sites. If your environment restricts external network access, note that research depth will be reduced.
- Saved extracts: the default skill-config.json sets save_source_extracts = true. Source extracts are intended to contain only the content actually used, but they will be written to the workspace. If you prefer not to retain extracts, set save_source_extracts to false before running.
- Model selection: the config includes provider/model strings as defaults. These are guidance for sessions_spawn; actual model usage is governed by your OpenClaw allowlist and runtime. If you want to constrain which models or providers the skill uses, add an allowlist in your openclaw.json (agents.defaults.models).
- Inspect files if concerned: because the skill is instruction-driven, you can review SKILL.md, the prompt templates in references/prompts, and scripts/assemble_prompts.py before installing. There are no hidden download URLs or credential requests in the package.

Overall, the package is internally consistent with its research orchestration purpose. If you will run research on sensitive topics or in a sensitive workspace, create a dedicated empty workspace directory for runs, and consider disabling source-extract saving or restricting network access as appropriate.
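The allowlist mentioned above goes in openclaw.json under agents.defaults.models. A minimal sketch is shown below; whether the field takes a plain list or a richer object, and the model identifiers themselves, are assumptions about your particular runtime rather than documented behavior:

```json
{
  "agents": {
    "defaults": {
      "models": [
        "anthropic/claude-sonnet-4-5"
      ]
    }
  }
}
```

With an allowlist in place, the provider/model defaults shipped in skill-config.json act only as suggestions; spawns are constrained by your runtime configuration.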
