Agent Orchestrate

v1.0.0

Multi-agent orchestration patterns for OpenClaw. Quick reference for spawning sub-agents, parallel work, and basic coordination. Use when: simple parallel ta...

by Molten Bot 000 (@moltenbot000)
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description (multi-agent orchestration) matches the instructions. All referenced operations are orchestration primitives (sessions_spawn, subagents, sessions_send, sessions_history) and local state files; there are no unrelated binaries, credentials, or external endpoints required.
Instruction Scope
SKILL.md contains pseudocode and patterns for spawning, polling, steering, killing, and collecting results, and for persisting orchestration state to local JSON/files. It does not instruct reading arbitrary system files, accessing unrelated environment variables, or sending data to unknown external endpoints. Human-in-the-loop messaging is limited to platform primitives (sessions_send).
Install Mechanism
No install spec and no code files beyond documentation — instruction-only. This is the lowest-risk install model (nothing is downloaded or written by an installer).
Credentials
The skill declares no required environment variables, credentials, or config paths. The instructions also do not reference hidden secrets or external service keys. This is proportionate for a coordination/reference skill.
Persistence & Privilege
always:false and no requests to modify other skills or global agent settings. The skill suggests the agent may spawn subagents (normal for orchestration); autonomous invocation is allowed by platform default but the skill itself does not demand elevated persistence or cross-skill access.
Assessment
This skill is a documentation/reference pack for orchestrating sub-agents and is internally coherent. Before installing:

1. Confirm your OpenClaw environment provides the referenced primitives (sessions_spawn, subagents, sessions_send, sessions_history) — otherwise the instructions are only theoretical.
2. Be aware orchestrations write local state/checkpoint files (e.g., orchestration-state.json, pipeline-state/). Avoid storing secrets in those files and ensure appropriate file permissions.
3. Orchestrations may spawn many subagents and incur compute costs — test with quotas/limits in a sandbox.
4. Because it is instruction-only and platform-dependent, review how subagents interact with external services (tasks you spawn may cause those subagents to call external APIs); limit agent permissions if you want to constrain the blast radius.

Overall this appears to be a benign, proportionate reference guide.


Updated 1 month ago
License: MIT-0

Agent Orchestration — Quick Reference

Simple patterns for multi-agent coordination. For advanced dynamic orchestration, see cord-trees.

Core Primitives

| Tool | Purpose |
| --- | --- |
| sessions_spawn | Create isolated sub-agent with task |
| subagents list | Check status of running agents |
| subagents steer | Send guidance to running agent |
| subagents kill | Terminate an agent |
| sessions_send | Message another session |

Spawn vs Fork

Two context strategies for sub-agents:

Spawn (Clean Slate)

Sub-agent gets only its task prompt. No parent context.

Use when:
- Task is self-contained
- You want isolation (no context bleed)
- Subtask doesn't need sibling results
- Cheaper/faster (smaller context)

Example: "Research competitor X" — doesn't need to know about competitors Y and Z.

Fork (Context-Inheriting)

Sub-agent receives accumulated results from siblings.

Use when:
- Synthesis/analysis across prior work
- Task builds on what others discovered
- Final integration step

Implementation: Include sibling results in the task prompt:

Task: Synthesize findings into recommendation.

Prior research:
- Competitor A: [result from agent 1]
- Competitor B: [result from agent 2]
- Market trends: [result from agent 3]
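The prompt above can be assembled mechanically. A minimal sketch — `build_fork_prompt` is an illustrative helper, not a platform primitive:

```python
def build_fork_prompt(task, sibling_results):
    """Embed prior sub-agent results into a new task prompt.

    sibling_results maps a short label to that agent's summarized result.
    Passing summaries, not full histories, keeps the forked context small.
    """
    lines = ["Task: " + task, "", "Prior research:"]
    for label, result in sibling_results.items():
        lines.append("- " + label + ": " + result)
    return "\n".join(lines)

prompt = build_fork_prompt(
    "Synthesize findings into recommendation.",
    {"Competitor A": "strong mobile presence",
     "Competitor B": "priced 20% lower"},
)
```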

Patterns

1. Parallel Fan-Out

Spawn N independent agents, wait for all to complete.

# Pseudocode
tasks = ["research A", "research B", "research C"]
for i, task in enumerate(tasks):
    sessions_spawn(task=task, label=f"research-{i}")

# Poll until all complete
while not all_complete(subagents list):
    wait(30s)

# Collect results from session histories

See: references/fan-out.md
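The poll loop can be made concrete. A sketch assuming a `list_status` callable that wraps `subagents list`; the wrapper and the status field names are assumptions, not platform API:

```python
import time

def wait_for_all(list_status, interval_s=30, timeout_s=1800):
    """Poll a status source until no agent is still running.

    list_status stands in for `subagents list` and is assumed to return
    dicts like {"label": ..., "status": "running" | "complete" | "failed"}.
    A hard deadline prevents the orchestrator itself from running away.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        agents = list_status()
        if all(a["status"] != "running" for a in agents):
            return agents
        time.sleep(interval_s)
    raise TimeoutError("fan-out did not finish before timeout")

# Stub for demonstration: reports running on the first poll, then complete.
calls = {"n": 0}
def fake_status():
    calls["n"] += 1
    status = "running" if calls["n"] == 1 else "complete"
    return [{"label": "research-0", "status": status}]

done = wait_for_all(fake_status, interval_s=0, timeout_s=5)
```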

2. Pipeline (Sequential)

Each agent's output feeds the next.

Agent 1: Research → 
  Agent 2: Analyze (using research) → 
    Agent 3: Write (using analysis)

Implementation: Spawn agent 1, wait for completion, spawn agent 2 with agent 1's result, etc.

See: references/pipeline.md
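The sequential hand-off can be sketched as a fold over the stages. Here `run_stage` is a stub standing in for spawn, wait for completion, and read the result from session history:

```python
def run_pipeline(stages, run_stage):
    """Run stages in order, feeding each one the previous stage's output.

    run_stage(name, prior_result) is assumed to spawn an agent for `name`,
    block until it finishes, and return its result text.
    """
    result = ""
    for name in stages:
        result = run_stage(name, result)
    return result

# Stub stage runner for demonstration: records the chaining order.
def fake_stage(name, prior):
    return f"{name}({prior})" if prior else name

out = run_pipeline(["research", "analyze", "write"], fake_stage)
```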

3. Dependency Tree

Tasks with explicit dependencies. Don't start X until Y completes.

#1 Research API surface
#2 Research GraphQL tradeoffs  
#3 Analysis (blocked-by: #1, #2)
#4 Recommendation (blocked-by: #3)

Implementation: Track state in a JSON file. Poll and spawn when dependencies clear.

See: references/dependency-tree.md
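Deciding what to spawn next reduces to selecting tasks whose dependencies have all cleared. A minimal selector over the same task shape used in the state file (the status names are illustrative):

```python
def ready_tasks(tasks):
    """Return ids of blocked tasks whose dependencies are all complete.

    tasks maps id -> {"status": ..., "blockedBy": [ids]}; tasks with no
    "blockedBy" key are treated as having no dependencies.
    """
    ready = []
    for tid, t in tasks.items():
        if t["status"] != "blocked":
            continue
        deps = t.get("blockedBy", [])
        if all(tasks[d]["status"] == "complete" for d in deps):
            ready.append(tid)
    return ready

state = {
    "research-a": {"status": "complete"},
    "research-b": {"status": "running"},
    "synthesis": {"status": "blocked",
                  "blockedBy": ["research-a", "research-b"]},
}
```

Once research-b completes, `ready_tasks` reports synthesis as spawnable.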

4. Human-in-the-Loop

Pause workflow for human input at checkpoints.

Agent 1: Draft proposal →
  [CHECKPOINT: Human approves/rejects] →
    Agent 2: Implement approved proposal

Implementation: Agent 1 completes, orchestrator messages human via sessions_send or channel message, waits for response before spawning agent 2.
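The gate itself is simple. A sketch where `request_approval` stands in for messaging the human (via sessions_send or a channel) and blocking until they reply; the helper names are illustrative:

```python
def checkpoint(draft, request_approval):
    """Gate the next workflow stage on human sign-off.

    request_approval(draft) is assumed to deliver the draft to a human
    and return True on approval. Only an approved draft produces a task
    for the next agent.
    """
    if request_approval(draft):
        return {"status": "approved", "next_task": "Implement: " + draft}
    return {"status": "rejected", "next_task": None}

# Stub approver for demonstration: always approves.
decision = checkpoint("Add rate limiting to the API", lambda draft: True)
```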

5. Supervisor Pattern

Orchestrator monitors agents and intervenes when stuck.

while agents_running:
    status = subagents list
    for agent in status:
        if stuck_too_long(agent):
            subagents steer(target=agent, message="Try alternative approach...")
        if clearly_failed(agent):
            subagents kill(target=agent)
            # Retry or escalate
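One way to implement `stuck_too_long` is an idle-time threshold. This assumes each agent record carries a last-activity epoch timestamp; the actual field name depends on what `subagents list` returns on your platform:

```python
import time

def stuck_too_long(agent, limit_s=600, now=None):
    """Treat an agent as stuck if it has been idle longer than limit_s.

    agent is assumed to carry a "lastActivity" epoch timestamp (an
    assumption about the status payload, not a documented field).
    """
    now = time.time() if now is None else now
    return (now - agent["lastActivity"]) > limit_s

agent = {"label": "research-a", "lastActivity": 1_000}
```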

State Management

For complex orchestrations, track state in a file:

// orchestration-state.json
{
  "tasks": {
    "research-a": {"status": "complete", "result": "...", "sessionKey": "..."},
    "research-b": {"status": "running", "sessionKey": "..."},
    "synthesis": {"status": "blocked", "blockedBy": ["research-a", "research-b"]}
  }
}

Update after each spawn, completion check, or state change.
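A small read-merge-write helper keeps those updates consistent. This sketch writes to a temp file and renames it, so a crash mid-write cannot leave a truncated state file behind (`update_state` is an illustrative helper):

```python
import json
import os
import tempfile

def update_state(path, task_id, **fields):
    """Merge fields into one task's entry and rewrite the state file."""
    try:
        with open(path) as f:
            state = json.load(f)
    except FileNotFoundError:
        state = {"tasks": {}}
    state["tasks"].setdefault(task_id, {}).update(fields)
    # Write to a temp file in the same directory, then atomically replace.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump(state, f, indent=2)
    os.replace(tmp, path)
    return state
```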

Best Practices

  1. Label agents clearly — Use descriptive labels for subagents list readability
  2. Set timeouts — Use runTimeoutSeconds to prevent runaways
  3. Don't over-parallelize — More agents ≠ better. Consider token costs.
  4. Checkpoint expensive work — Write intermediate results to files
  5. Handle failures — Decide: retry, skip, or escalate to human
  6. Keep tasks focused — One clear goal per agent. Easier to debug.

Anti-Patterns

❌ Polling in tight loops — Use reasonable intervals (30s+)
❌ Spawning agents for trivial tasks — Just do it yourself
❌ Giant context dumps — Summarize, don't copy entire histories
❌ No failure handling — Agents fail. Plan for it.

Choosing a Pattern

| Situation | Pattern |
| --- | --- |
| N independent research tasks | Fan-out |
| Step A → Step B → Step C | Pipeline |
| Complex task with prerequisites | Dependency tree |
| Need human approval mid-flow | Human-in-the-loop |
| Long-running with potential issues | Supervisor |
| Simple one-off subtask | Just spawn one agent |

Quick Reference

# Spawn a sub-agent
sessions_spawn(task="Do X", label="my-task", runTimeoutSeconds=300)

# Check status
subagents(action="list")

# Send guidance
subagents(action="steer", target="my-task", message="Focus on Y instead")

# Kill runaway
subagents(action="kill", target="my-task")
