Install
openclaw skills install researching-in-parallel

Research any topic thoroughly by running three sub-agents with distinct analytical lenses (breadth, critique, evidence), then giving their outputs to a final synthesis agent.
Trigger when the user asks to:
Do not trigger for quick factual questions, casual conversation, or anything answerable from memory in a few sentences. This skill takes significant elapsed time and incurs meaningful token cost.
Confirm with the user:
save_source_extracts in skill-config.json accordingly.

You may complete all preparatory work (reading config, checking the workspace, identifying provided sources) without waiting for confirmation. The confirmation gate below covers the spawn decision only.
If the user has provided starting-point sources, write a sources-provided-[TOPIC_SLUG]-[DATE].md file inside the workspace location ([WORKSPACE_PATH]/sources-provided-[TOPIC_SLUG]-[DATE].md) before spawning. This file is part of the reproducible research record.
Format:
# Provided Sources: [TOPIC]
*Assembled [DATE] for research run [TOPIC_SLUG]-[DATE]*
[Full path or URL to each source, one per line, with a brief note on what it is]
If no sources were provided, skip this step. Sub-agents will check for this file — if absent they proceed with open research only.
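The sources-provided file can be written with a small helper. A minimal Python sketch, assuming the file name pattern and header format documented above; the function name and the `(path_or_url, note)` pair shape are illustrative, not part of the skill:

```python
from pathlib import Path

def write_provided_sources(workspace: str, topic_slug: str, date: str,
                           topic: str, sources: list[tuple[str, str]]) -> Path:
    """Write sources-provided-[TOPIC_SLUG]-[DATE].md into the workspace.

    `sources` is a list of (path_or_url, note) pairs, one per line in the
    output, matching the documented format.
    """
    path = Path(workspace) / f"sources-provided-{topic_slug}-{date}.md"
    lines = [
        f"# Provided Sources: {topic}",
        f"*Assembled {date} for research run {topic_slug}-{date}*",
        "",
    ]
    lines += [f"{src} - {note}" for src, note in sources]
    path.write_text("\n".join(lines) + "\n", encoding="utf-8")
    return path
```

Sub-agents only need the file to exist at the documented path; if `sources` is empty, skip calling the helper entirely, per the rule above.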
If the user wants an existing report to be updated, make a copy of the existing report in the workspace location, and copy the sources-provided-* file associated with that report.
Read skill-config.json now. Note the prompt_template and model assigned to each role.
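Reading the config might look like the following Python sketch. The `subagents.<role>` key layout and the `model` / `prompt_template` field names are assumptions based on the role table below; the real skill-config.json schema may differ:

```python
import json
from pathlib import Path

ROLES = ("breadth", "critical", "evidence", "synthesis", "updater")

def load_role_assignments(config_path: str) -> dict[str, dict]:
    """Return {role: {"model": ..., "prompt_template": ...}} from skill-config.json.

    A null model is passed through unchanged so the caller can detect it
    and ask the user to select one.
    """
    config = json.loads(Path(config_path).read_text(encoding="utf-8"))
    subagents = config["subagents"]
    assignments = {}
    for role in ROLES:
        entry = subagents[role]
        assignments[role] = {
            "model": entry.get("model"),  # may be None: ask the user
            "prompt_template": entry.get("prompt_template"),
        }
    return assignments
```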
Initial agents
| Label | Role | Config |
|---|---|---|
| research-breadth | Breadth sweep | subagents.breadth |
| research-critical | Critical lens | subagents.critical |
| research-evidence | Evidence pass | subagents.evidence |
Final agent (one only)

If the user wants an existing report to be updated, use the Updater sub-agent. For a new report, use Synthesis.
| Label | Role | Config |
|---|---|---|
| review-report | Synthesis | subagents.synthesis |
| review-report-updated | Updater | subagents.updater |
You must make it clear to the user which models will be used for each agent, and alert the user if several roles are assigned the same model (homogeneity). If any model parameter is null in skill-config.json, attempt to identify which models are available and ask the user to select one.
Assemble each sub-agent's task prompt by combining the files specified.
Where you see a {{INSERT: shared-blocks.md > Block: [name]}} marker, paste the corresponding block from shared-blocks.md verbatim. Replace {{PLACEHOLDERS}} with values from skill-config.json and the confirmed brief. For {{INJECT_CONTEXT}}, insert any run-specific instructions relevant to this role (e.g. gaps identified in a prior run, specific angles to prioritise). Delete the section if you have nothing to add. For {{REPORT_TO_EDIT_PATH}}, use paths under [WORKSPACE_PATH] to ensure the sub-agent can find them. Write the assembled prompt files into [WORKSPACE_PATH].

Before spawning sub-agents: Present the following to the user and wait for explicit confirmation before proceeding to Step 3a:
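The marker expansion can be sketched in Python. This is a minimal illustration, not the skill's own implementation; the `assemble_prompt` name and argument shapes are assumptions, and `shared_blocks` is assumed to be a dict of block name to block text parsed from shared-blocks.md:

```python
import re

def assemble_prompt(template_text: str, shared_blocks: dict[str, str],
                    values: dict[str, str], inject_context: str = "") -> str:
    """Expand a role's prompt template.

    - {{INSERT: shared-blocks.md > Block: name}} becomes the named block verbatim.
    - {{INJECT_CONTEXT}} becomes run-specific instructions (empty string deletes it).
    - Remaining {{PLACEHOLDERS}} are filled from `values`.
    """
    text = re.sub(
        r"\{\{INSERT: shared-blocks\.md > Block: ([^}]+)\}\}",
        lambda m: shared_blocks[m.group(1).strip()],
        template_text,
    )
    text = text.replace("{{INJECT_CONTEXT}}", inject_context)
    for key, val in values.items():
        text = text.replace("{{" + key + "}}", val)
    return text
```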
The user may override any of these. If the user asks for changes, redo whatever is needed to comply, then ask again for explicit confirmation to proceed. Sub-agents are resource-intensive and costly - do not spawn sub-agents until the user confirms.
Get the parameters defined in skill-config.json: subagents.params
Call sessions_spawn for the sub-agents. For task, use the prompt file you created for that agent. Respect the subagents.maxConcurrent parameter.
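Respecting `maxConcurrent` can be sketched with a bounded worker pool. `sessions_spawn` is represented here as an injected callable taking `(label, task_path)`; its real signature and return value belong to the skill's runtime and may differ:

```python
from concurrent.futures import ThreadPoolExecutor

def spawn_all(sessions_spawn, prompt_files: dict[str, str],
              max_concurrent: int) -> dict[str, object]:
    """Spawn one sub-agent per prompt file, at most max_concurrent at a time.

    `prompt_files` maps the agent label (e.g. "research-breadth") to the
    path of the assembled prompt file used as its task.
    """
    results = {}
    with ThreadPoolExecutor(max_workers=max_concurrent) as pool:
        futures = {label: pool.submit(sessions_spawn, label, path)
                   for label, path in prompt_files.items()}
        for label, fut in futures.items():
            results[label] = fut.result()  # propagate any spawn failure
    return results
```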
When a sub-agent terminates for any reason, check for the existence of its expected outputs at the file path(s) specified. Do not proceed to Step 4 with missing outputs — investigate and resolve first.
Consolidate all SOURCES sections from the three research outputs into bibliography-[TOPIC_SLUG]-[DATE].md inside the workspace. Deduplicate across passes. Flag single-pass-only sources. Add the BibTeX block.
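The deduplication and single-pass flagging can be sketched as follows, assuming each pass's SOURCES section has already been parsed into a list of source strings; the function name and return shape are illustrative:

```python
def consolidate_sources(per_pass: dict[str, list[str]]) -> dict[str, set[str]]:
    """Merge SOURCES entries from the research passes.

    Returns {source: {passes that cited it}}. Sources whose set has
    exactly one member are the single-pass-only entries to flag in the
    bibliography.
    """
    merged: dict[str, set[str]] = {}
    for pass_name, sources in per_pass.items():
        for src in sources:
            merged.setdefault(src.strip(), set()).add(pass_name)
    return merged
```

Entries cited by only one pass are then flagged when the bibliography file is written.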
Follow the structure in assets/bibliography-template.md exactly. The template includes:
Populate all columns. Do not omit the Access or AI-generated columns — these were present in the sub-agent SOURCES sections.
Spawn the dedicated synthesis or updater sub-agent via sessions_spawn. For task, use the prompt file you created for that agent.
If the sub-agent fails to deliver in full or times out, investigate and resolve. Do not change the model without explaining the issue to the user and getting approval.
When the final sub-agent completes and has delivered a report file, announce completion in chat. Confirm all workspace artifacts are saved and give the user the file paths. Remind the user that the current outputs and bibliography can serve as provided sources for another research run.
For sub-agent task prompts, see references/prompts/ (one file per role plus shared-blocks.md).
For output templates, see assets/report-template.md and assets/bibliography-template.md.
For model assignments, see skill-config.json.
For configuration and cost guidance, see references/configuration.md.