Superpowers Dispatching Parallel Agents

v1.0.0

Dispatch independent tasks to focused agents working concurrently on isolated problems without shared state or sequential dependencies for faster resolution.

Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Name and description match the SKILL.md content: the document describes how to spawn focused subagents to tackle independent test/code problems in parallel. There are no declarations (env, binaries, or installs) that are out of line with a coordination/prompting pattern.
Instruction Scope
Instructions tell agents to read specific test files, identify root causes, and make focused fixes — this is appropriate for a developer troubleshooting pattern. Note: the skill explicitly instructs agents to read and change project files (tests/production code) when tasked, so users should be aware agents following this may modify repository contents; the SKILL.md includes constraints (e.g., 'Do NOT change production code') which help limit scope.
Install Mechanism
No install spec and no code files — lowest-risk model: nothing is written to disk or fetched at install time.
Credentials
No environment variables, credentials, or config paths are requested. The skill does not ask for secrets or unrelated service tokens.
Persistence & Privilege
always:false and default agent invocation are used. This is standard for skills; it does not request elevated or persistent platform privileges and does not modify other skills or global configuration.
Assessment
This skill is an instruction-only pattern for coordinating parallel developer agents and is internally consistent. Before using it: limit the agent's repository scope and avoid giving it unrelated secrets; on first runs, require human review of proposed changes (the SKILL.md already recommends reviewing summaries and running the full test suite); apply it only to truly independent problems (shared-state bugs should be handled sequentially); and consider running agent edits in a branch or sandbox so you can review/CI-test changes before merging.


MIT-0
<!-- Original: https://github.com/obra/superpowers, MIT License -->

name: superpowers-dispatching-parallel-agents
description: Use when facing 2+ independent tasks that can be worked on without shared state or sequential dependencies
version: 1.0.0

Dispatching Parallel Agents

Overview

You delegate tasks to specialized agents with isolated context. By precisely crafting their instructions and context, you ensure they stay focused and succeed at their task. They should never inherit your session's context or history — you construct exactly what they need. This also preserves your own context for coordination work.

When you have multiple unrelated failures (different test files, different subsystems, different bugs), investigating them sequentially wastes time. Each investigation is independent and can happen in parallel.

Core principle: Dispatch one agent per independent problem domain. Let them work concurrently.

When to Use

Multiple failures?
  → Are they independent?
    → yes → Can they work in parallel?
      → yes → Parallel dispatch
      → no (shared state) → Sequential agents
    → no (related) → Single agent investigates all

Use when:

  • 3+ test files failing with different root causes
  • Multiple subsystems broken independently
  • Each problem can be understood without context from others
  • No shared state between investigations

Don't use when:

  • Failures are related (fix one might fix others)
  • Need to understand full system state
  • Agents would interfere with each other

The Pattern

1. Identify Independent Domains

Group failures by what's broken:

  • File A tests: Tool approval flow
  • File B tests: Batch completion behavior
  • File C tests: Abort functionality

Each domain is independent — fixing tool approval doesn't affect abort tests.

2. Create Focused Agent Tasks

Each agent gets:

  • Specific scope: One test file or subsystem
  • Clear goal: Make these tests pass
  • Constraints: Don't change other code
  • Expected output: Summary of what you found and fixed
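The four ingredients above can be captured in a small task record before dispatch. The shape below is an illustrative sketch only; the skill is prompt-only and defines no schema, so `AgentTask` and its field names are hypothetical.

```typescript
// Illustrative task record for one focused agent. All names here are
// hypothetical -- the skill itself prescribes no concrete data structure.
interface AgentTask {
  scope: string;          // one test file or subsystem
  goal: string;           // what "done" means for this agent
  constraints: string[];  // what the agent must not touch
  expectedOutput: string; // what the agent should return
}

const abortTask: AgentTask = {
  scope: "src/agents/agent-tool-abort.test.ts",
  goal: "Make the 3 failing tests pass",
  constraints: ["Do NOT change production code", "Do NOT just increase timeouts"],
  expectedOutput: "Summary of what you found and what you fixed",
};

console.log(abortTask.scope); // the single file this agent is allowed to target
```

Writing the task down this way makes it easy to spot a prompt that is missing one of the four ingredients before you spawn the agent.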

3. Dispatch in Parallel

Spawn subagents simultaneously — each handles one domain:

Agent 1 → Fix agent-tool-abort.test.ts failures
Agent 2 → Fix batch-completion-behavior.test.ts failures
Agent 3 → Fix tool-approval-race-conditions.test.ts failures

All three run concurrently.
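The fan-out above can be sketched with ordinary promise concurrency. `dispatchAgent` is a hypothetical stand-in for whatever subagent-spawning call your harness provides; the skill does not name a concrete API.

```typescript
// Stub standing in for a real subagent call; hypothetical, since the
// skill describes a pattern rather than a specific dispatch function.
async function dispatchAgent(task: string): Promise<string> {
  return `summary for: ${task}`;
}

// Promise.all starts every dispatch before awaiting any of them, so the
// three investigations proceed concurrently rather than one after another.
async function dispatchAll(tasks: string[]): Promise<string[]> {
  return Promise.all(tasks.map((t) => dispatchAgent(t)));
}

const summaries = await dispatchAll([
  "Fix agent-tool-abort.test.ts failures",
  "Fix batch-completion-behavior.test.ts failures",
  "Fix tool-approval-race-conditions.test.ts failures",
]);
console.log(summaries.length); // one summary per agent, ready for review
```

The coordinator's only job while the agents run is to wait for all of them; the review work starts once every summary is back.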

4. Review and Integrate

When agents return:

  • Read each summary
  • Verify fixes don't conflict
  • Run full test suite
  • Integrate all changes
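The conflict check can be mechanized by comparing the file lists each agent reports back. The `editedFiles` field below is hypothetical report data, not something the skill prescribes; it is a minimal sketch of the idea.

```typescript
// Flag any file touched by more than one agent so it gets a manual review
// before integration. Report shape is illustrative only.
function findConflicts(
  reports: { agent: string; editedFiles: string[] }[],
): string[] {
  const counts = new Map<string, number>();
  for (const report of reports) {
    for (const file of report.editedFiles) {
      counts.set(file, (counts.get(file) ?? 0) + 1);
    }
  }
  // Any file with more than one editor is a potential conflict.
  return [...counts.entries()].filter(([, n]) => n > 1).map(([file]) => file);
}

const conflicts = findConflicts([
  { agent: "1", editedFiles: ["src/agents/abort.ts"] },
  { agent: "2", editedFiles: ["src/events/batch.ts"] },
  { agent: "3", editedFiles: ["src/agents/abort.ts"] }, // overlaps with agent 1
]);
console.log(conflicts); // ["src/agents/abort.ts"] -> review this one by hand
```

An empty result does not replace running the full suite; it only tells you where to look first when the suite fails after integration.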

Agent Prompt Structure

Good agent prompts are:

  1. Focused — One clear problem domain
  2. Self-contained — All context needed to understand the problem
  3. Specific about output — What should the agent return?

Example prompt:

Fix the 3 failing tests in src/agents/agent-tool-abort.test.ts:

1. "should abort tool with partial output capture" - expects 'interrupted at' in message
2. "should handle mixed completed and aborted tools" - fast tool aborted instead of completed
3. "should properly track pendingToolCount" - expects 3 results but gets 0

These are timing/race condition issues. Your task:

1. Read the test file and understand what each test verifies
2. Identify root cause - timing issues or actual bugs?
3. Fix by:
   - Replacing arbitrary timeouts with event-based waiting
   - Fixing bugs in abort implementation if found
   - Adjusting test expectations if testing changed behavior

Do NOT just increase timeouts - find the real issue.

Return: Summary of what you found and what you fixed.

Common Mistakes

| Mistake | Fix |
| --- | --- |
| Too broad: "Fix all the tests" | Specific: "Fix agent-tool-abort.test.ts" |
| No context: "Fix the race condition" | Paste error messages and test names |
| No constraints: agent might refactor everything | "Do NOT change production code" or "Fix tests only" |
| Vague output: "Fix it" | "Return summary of root cause and changes" |

Real Example from Session

Scenario: 6 test failures across 3 files after major refactoring

Failures:

  • agent-tool-abort.test.ts: 3 failures (timing issues)
  • batch-completion-behavior.test.ts: 2 failures (tools not executing)
  • tool-approval-race-conditions.test.ts: 1 failure (execution count = 0)

Decision: Independent domains — abort logic separate from batch completion separate from race conditions

Dispatch:

Agent 1 → Fix agent-tool-abort.test.ts
Agent 2 → Fix batch-completion-behavior.test.ts
Agent 3 → Fix tool-approval-race-conditions.test.ts

Results:

  • Agent 1: Replaced timeouts with event-based waiting
  • Agent 2: Fixed event structure bug (threadId in wrong place)
  • Agent 3: Added wait for async tool execution to complete

Integration: All fixes independent, no conflicts, full suite green

Time saved: 3 problems solved in parallel vs sequentially

Key Benefits

  1. Parallelization — Multiple investigations happen simultaneously
  2. Focus — Each agent has narrow scope, less context to track
  3. Independence — Agents don't interfere with each other
  4. Speed — 3 problems solved in the time of one

Verification

After agents return:

  1. Review each summary — Understand what changed
  2. Check for conflicts — Did agents edit same code?
  3. Run full suite — Verify all fixes work together
  4. Spot check — Agents can make systematic errors
