Install
```
openclaw skills install roundtable-adaptive
```

Adaptive multi-model AI roundtable. Runs up to 4 AI models (configurable) in 2 debate rounds with cross-critique and formal consensus scoring. Requires a configured Anthropic provider (Claude Opus recommended). Optionally adds GPT-5.3 Codex (OpenAI), Grok 4, and Gemini 3.1 Pro via the Blockrun proxy. Falls back to Claude-only operation if the optional providers are unavailable. Writes results to the local filesystem. Debate panel agents are persistent thread sessions; meta-panel and synthesis agents are one-shot.

Trigger: `roundtable [--mode] [prompt]` from any channel your agent monitors.
Output: Posted to your configured output channel (set `ROUNDTABLE_OUTPUT_CHANNEL` in your OpenClaw config); if unset, results are posted back to the triggering channel.
Panel agents: Persistent sessions (mode="session", thread=true) — stay alive in the Discord thread for follow-up questions. Meta-panel analysts and synthesis agent are one-shot (mode="run").
The orchestrator is a COORDINATOR ONLY. It uses your default model unless overridden in panels.json. It never argues a position and never joins the panel.
Core principle: the Meta-Panel (4 premium models) designs the optimal WORKFLOW for the task — parallel debate, sequential pipeline, or hybrid — then the right agents execute it.
Before using, set your output channel in panels.json (or the triggering channel is used):
```json
{
  "output": {
    "channel": "discord",
    "target": "YOUR_CHANNEL_ID_HERE"
  }
}
```
If using Discord threads (optional — creates one thread per roundtable for clean organization):
```json
{
  "output": {
    "channel": "discord",
    "target": "YOUR_CHANNEL_ID_HERE",
    "useThreads": true
  }
}
```
Without this config, results are posted directly to the channel where the command was issued.
| Component | Cost per full run |
|---|---|
| Claude Opus (OAuth) | Free |
| GPT-5.3 Codex (OAuth) | Free |
| Gemini 3.1 Pro (Blockrun) | ~$0.05 |
| Grok 4 (Blockrun) | ~$0.08 |
| Total (full panel) | ~$0.13–$0.50 |
| Degraded mode (Claude only) | Free |
The `--quick` flag halves the cost (1 round only).
Minimum (degraded mode — free):
- `anthropic` provider in `openclaw.json` (OAuth or API key)
- `openai-codex` provider for GPT-5.3 Codex (optional)

Full panel (adds Grok 4 + Gemini 3.1 Pro via Blockrun):

```
openclaw plugins install @blockrun/clawrouter
openclaw gateway restart
```

Results are saved to `{workspace}/memory/roundtables/YYYY-MM-DD-slug.json` (created automatically).
You can configure a Discord channel as a roundtable-only channel in your AGENTS.md:
Any message in channel [YOUR_CHANNEL_ID] → treat as a roundtable topic automatically.
No prefix needed. Message → auto-detect mode → create thread → spawn orchestrator.
This is entirely optional — the explicit roundtable command works from any channel.
- `roundtable [prompt]` — auto-detect mode, full flow
- `roundtable --debate [prompt]` — force parallel debate mode
- `roundtable --build [prompt]` — force build/coding mode
- `roundtable --redteam [prompt]` — force adversarial mode
- `roundtable --vote [prompt]` — force decision mode
- `roundtable --quick [prompt]` — skip meta-panel, use default panel for mode, 1 round only
- `roundtable --panel model1,model2,model3 [prompt]` — manual panel override, skip meta-panel
- `roundtable --validate [prompt]` — add Round 3 agent validation of synthesis
- `roundtable --no-search [prompt]` — skip web search (use only for purely theoretical/abstract topics)

Before anything else, create a thread in your configured channel and save the thread ID.
Avoid double-spawn if the same topic is triggered twice.
```
message(action='thread-list', channel='discord', channelId='[CHANNEL_ID]', limit=25)
```
If a thread for the same topic (same mode tag, e.g. `[[DEBATE]]`) was created in the last 24h: reuse it (`THREAD_ID = existing_thread_id`) and post "♻️ Duplicate topic detected — reusing existing thread." Otherwise, create a new thread:

```
message(
  action = 'thread-create',
  channel = '[your configured channel]',
  channelId = '[CHANNEL_ID from user config]',
  threadName = '🎯 [topic — max 8 words] [[MODE]]',
  message = '**Panel:** [model list]\n**Mode:** [mode] | **Rounds:** [N]\n⏳ Analysis in progress...'
)
```
Save the returned thread ID as THREAD_ID.
All subsequent message() calls use target = THREAD_ID, NOT the channel ID.
If thread creation fails or channel is not configured: fall back to posting directly in the active channel.
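The duplicate-thread check above can be sketched as follows. This is a minimal illustration, assuming the `thread-list` call returns entries with `name`, `createdAt` (epoch seconds), and `id` fields — the actual return shape of `message()` may differ:

```python
import time

DAY_SECONDS = 86400  # 24h dedup window from the step above

def find_recent_duplicate(threads, topic, now=None):
    """Return the ID of an existing thread for the same topic created
    in the last 24h, or None if a new thread should be created."""
    now = now if now is not None else time.time()
    for t in threads:
        same_topic = topic.lower() in t.get("name", "").lower()
        recent = now - t.get("createdAt", 0) < DAY_SECONDS
        if same_topic and recent:
            return t.get("id")
    return None
```

If this returns a thread ID, reuse it as `THREAD_ID`; otherwise proceed with `thread-create`.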
Run a web search on the topic before anything else — meta-panel and all agents will have current context.
```
web_search(query = prompt, count = 5)
```
Timeout policy: If web_search returns no result or errors within ~10s, do NOT block — continue immediately with CURRENT_CONTEXT = "No real-time data available (search failed or timed out).". The roundtable proceeds on model knowledge only.
Caching: If re-running the same topic within the same session, reuse the prior CURRENT_CONTEXT block — do not re-search.
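The timeout and caching policy can be sketched as below. `search` here is a stand-in callable for the actual `web_search` tool, and the cache is a simple per-session dict (an assumption — the orchestrator may track this differently):

```python
_CONTEXT_CACHE: dict[str, str] = {}
FALLBACK = "No real-time data available (search failed or timed out)."

def get_current_context(topic, search):
    """Return cached context for a repeated topic; on search failure,
    fall back to the no-data notice instead of blocking."""
    if topic in _CONTEXT_CACHE:          # same session, same topic → reuse
        return _CONTEXT_CACHE[topic]
    try:
        context = search(topic) or FALLBACK
    except Exception:                    # error or ~10s timeout → don't block
        context = FALLBACK
    _CONTEXT_CACHE[topic] = context
    return context
```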
Summarize results into a CURRENT_CONTEXT block (max 250 words):
This block is injected into the meta-panel prompt and all agent prompts.

Skip the meta-panel if the `--panel` flag or the `--quick` flag is used.
Read panels.json → meta.models. For each:
```
sessions_spawn(
  task = filled prompts/meta-panel.md,
  model = model_id,
  mode = "run",
  label = "rt-meta-[A/B/C/D]",
  runTimeoutSeconds = 90
)
```
After collecting all meta responses, the orchestrator synthesizes the final workflow:
- Workflow type: majority vote among the 4 recommendations; on a tie, pick hybrid (more flexible)
- Stage composition: tally model recommendations per stage; a model not on the `agents.defaults.models` allowlist → skip, use next
- Rounds: median of recommendations (round up if tie) — hard cap at 3 max, always
- Synthesis model: most-recommended premium model not on the main panel
Log the decision (include in output header):
"Meta-panel designed workflow: [type]. Stages: [N]. Panel: [models]. Synthesis: [model]."
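The aggregation rules above can be sketched as follows. This is illustrative only — the function and field names (`design_workflow`, `"workflow"`, `"rounds"`) are assumptions, not part of the skill's API:

```python
import math
from collections import Counter

def design_workflow(recs):
    """Each rec: {"workflow": str, "rounds": int} from one meta-panel model."""
    # Workflow type: majority vote; a tie resolves to "hybrid" (more flexible)
    votes = Counter(r["workflow"] for r in recs).most_common()
    tie = len(votes) > 1 and votes[0][1] == votes[1][1]
    workflow = "hybrid" if tie else votes[0][0]
    # Rounds: median of recommendations, rounding up on an even split,
    # hard-capped at 3
    rounds = sorted(r["rounds"] for r in recs)
    n = len(rounds)
    if n % 2:
        median = rounds[n // 2]
    else:
        median = math.ceil((rounds[n // 2 - 1] + rounds[n // 2]) / 2)
    return {"workflow": workflow, "rounds": min(median, 3)}
```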
- `parallel_debate` — classic roundtable
- `sequential` — output chains between stages
- `hybrid` — parallel within stages, sequential between
If any agent fails and fallback is SAME MODEL FAMILY → log:
⚠️ PANEL DEGRADED — [role] substituted [original] with [fallback] (same family: [family])
Always surface this in META section of final output with actionable guidance:
- Suggest `--panel budget` for a stable 2-model run
- Suggest adjusting `agents.defaults.models` in `openclaw.json`

| Mode | Keywords |
|---|---|
| debate | pros/cons, tradeoff, should we, ethics, compare, opinion, better |
| build | implement, code, architecture, build, design, develop, create |
| redteam | attack, vulnerability, failure, risk, break, threat, exploit |
| vote | choose, decide, which one, best option, select, recommend between |
| default | anything else |
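The keyword table above can be sketched as a simple matcher. The most-hits tie-breaking strategy is an assumption — the source only specifies the keywords, not how multiple matches are resolved:

```python
MODE_KEYWORDS = {
    "debate":  ["pros/cons", "tradeoff", "should we", "ethics", "compare", "opinion", "better"],
    "build":   ["implement", "code", "architecture", "build", "design", "develop", "create"],
    "redteam": ["attack", "vulnerability", "failure", "risk", "break", "threat", "exploit"],
    "vote":    ["choose", "decide", "which one", "best option", "select", "recommend between"],
}

def detect_mode(prompt):
    """Pick the mode whose keywords appear most often; anything else → default."""
    text = prompt.lower()
    hits = {mode: sum(kw in text for kw in kws) for mode, kws in MODE_KEYWORDS.items()}
    best = max(hits, key=hits.get)
    return best if hits[best] > 0 else "default"
```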
Round 1: Spawn all panel agents in parallel as persistent thread-bound sessions.
```
sessions_spawn(
  task = filled prompts/round1.md,
  model = model_id,
  mode = "session",   # persistent — stays alive in the thread
  label = "rt-[role]",
  thread = true       # bound to the thread from Step -1
)
```
Store the returned session keys: `{ "attacker": sessionKey, "defender": sessionKey, ... }`

Round 2 (if rounds ≥ 2): Send the cross-critique prompt to each existing session via `sessions_send`.
- `[SELF_DIGEST]` = this agent's own digest from Round 1
- `[PEER_DIGESTS]` = the other agents' digests (labeled with role)

Round 3 (if `--validate`): See Step 4.
Sequential workflow:
- Stage 1: Spawn agents in parallel as persistent sessions (mode="session", thread=true) with `prompts/round1.md`. Later interactions go via `sessions_send` to the existing sessions (no re-spawn).
- Stage 2: Spawn new persistent sessions (mode="session", thread=true) with the `prompts/round1.md` base plus Stage 1 outputs prepended as context.

Hybrid workflow:
- Stage 1: Parallel persistent sessions (mode="session", thread=true), each with a different sub-task: "Your specific task for this stage: [task from workflow design]"
- Stage 2: 1-2 new persistent sessions (mode="session", thread=true) with all Stage 1 outputs embedded: `prompts/round1.md` base plus "You are integrating and synthesizing the work of multiple agents. Their outputs: [all Stage 1 outputs]"

After Round 2 (parallel_debate) or Stage 2 (sequential/hybrid):
Extract AGREEMENT SCORES from each agent's Round 2 response.
Build score matrix: { agent_role: { peer_role: score_1_to_5 } }
Consensus % = (sum of all scores / (n_scores × 5)) × 100
If no Round 2 scores (quick mode / sequential): omit consensus %, mark as "N/A"
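The consensus formula above can be sketched directly (the score matrix shape follows the step above; the function name is illustrative):

```python
def consensus_pct(matrix):
    """Consensus % = (sum of all scores / (n_scores * 5)) * 100.
    matrix: { agent_role: { peer_role: score_1_to_5 } }"""
    scores = [s for peers in matrix.values() for s in peers.values()]
    if not scores:  # quick mode / sequential: no Round 2 scores
        return "N/A"
    return f"{sum(scores) / (len(scores) * 5) * 100:.0f}%"
```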
Note on Round 3: Round 3 validation uses ACCURATE/PARTIALLY/INACCURATE — this is a separate metric from consensus %. Round 3 checks synthesis fidelity, not inter-agent agreement. Do NOT mix these two metrics. Consensus % comes only from Round 2 scores; the Round 3 result appears separately in the META block as `Validated: yes/no/partial`.
Round 3 validation (`--validate` flag only). When to recommend `--validate` to the user:
When NOT to use it: Quick mode, debate on subjective topics, or when time matters more than precision.
Draft synthesis first (Step 5 below), but do NOT post.
Spawn validation agents:
```
sessions_spawn(
  task = filled prompts/round3-validation.md,
  model = original agent model,
  label = "rt-r3-validate-[role]",
  runTimeoutSeconds = 60
)
```
Tally:
- If a role flags issues, post: "⚠️ [Role] flagged misrepresentation: [correction summary]"
- Record `Validated: yes` or `Validated: partial` in META

Never write the synthesis yourself:
```
sessions_spawn(
  task = filled prompts/final-synthesis.md,
  model = [synthesis model from meta-panel recommendation, or anthropic/claude-opus-4-6 as default],
  label = "rt-synthesis",
  mode = "run",
  runTimeoutSeconds = 180
)
```
Fill prompts/final-synthesis.md placeholders:
- `[ROUND1_SUMMARIES]` → all self-digests: "[ROLE] ([model]): [digest]"
- `[ROUND2_SUMMARIES]` → critiques: "[ROLE] criticized [peer]'s [claim] because [reason]"
- `[CONSENSUS_SCORES]` → full score matrix + calculated %
- `[DISCORD_THREAD_ID]` → the THREAD_ID from Step -1 (the synthesis agent posts here)

Post to Discord using THREAD_ID from Step -1 (not the channel ID). All round outputs and the final synthesis go into the same thread.
Save to {workspace}/memory/roundtables/YYYY-MM-DD-[topic-slug].json:
```json
{
  "date": "YYYY-MM-DD",
  "topic": "[prompt]",
  "mode": "[mode]",
  "workflow_type": "parallel_debate|sequential|hybrid",
  "stages": [{ "model": "...", "role": "...", "task": "..." }],
  "meta_panel_recommendation": "[summary of meta votes]",
  "panel_degraded": false,
  "panel_degradation_notes": "",
  "consensus_pct": "XX% or N/A",
  "synthesis_model": "[model]",
  "validated": "yes|no|partial",
  "elapsed_time_sec": 0,
  "synthesis": "[final synthesis text]"
}
```
Also append one JSONL line to {workspace}/memory/roundtables/scorecard.jsonl with:
ts, topic, mode, workflow_type, elapsed_time_sec, consensus_pct, validated, panel_degraded.
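The persistence step can be sketched as below. Paths follow the layout above; the helper name and slug rule (lowercase, hyphens, truncated) are illustrative assumptions:

```python
import json
import re
import time
from datetime import date
from pathlib import Path

def save_roundtable(workspace, topic, record):
    """Write the full run record and append one scorecard.jsonl line."""
    slug = re.sub(r"[^a-z0-9]+", "-", topic.lower()).strip("-")[:40]
    out_dir = Path(workspace) / "memory" / "roundtables"
    out_dir.mkdir(parents=True, exist_ok=True)  # created automatically
    path = out_dir / f"{date.today().isoformat()}-{slug}.json"
    path.write_text(json.dumps(record, indent=2))
    # One-line scorecard entry with the fields listed above
    keys = ["ts", "topic", "mode", "workflow_type", "elapsed_time_sec",
            "consensus_pct", "validated", "panel_degraded"]
    row = {k: record.get(k) for k in keys}
    row["ts"] = int(time.time())
    row["topic"] = topic
    with (out_dir / "scorecard.jsonl").open("a") as f:
        f.write(json.dumps(row) + "\n")
    return path
```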
| Situation | Action |
|---|---|
| Web search fails | Continue with note "No real-time context available" in all prompts |
| `--no-search` flag | Skip Step 0 web search entirely |
| Meta-panel all fail | Use default panel for detected mode, log warning |
| `--quick` | Skip meta-panel + Round 2. Always uses parallel_debate workflow. Spawns default panel for detected mode (3 models). Synthesizes after Round 1 only. |
| `--panel` override | Skip meta-panel, use specified models, default to parallel_debate |
| Fallback = same family | Continue + log PANEL DEGRADED warning in META |
| Both model and fallback fail | Skip agent, note in META — do not wait, do not block |
| No Blockrun configured | Warn user: "Blockrun not available. Using budget panel. Full panel requires Blockrun at localhost:8402." Auto-switch to the budget profile from panels.json. |
| Agent timeout (any round) | FAIL-CONTINUE: treat as absent, mark [TIMEOUT] in META, proceed with surviving agents |
| Agent fails mid-Round 2 | Use its Round 1 digest as final position, omit its scores from consensus calculation |
| Synthesis agent fails | Orchestrator writes synthesis, note: "Synthesis by orchestrator (bias risk — no neutral model available)" |
| Stage 2 agent fails | Note in META, synthesize with Stage 1 only |
| 0 agents respond | Report failure, suggest retry |
| 1 agent responds | Skip Round 2 (no peers), synthesize from Round 1 only, mark consensus "N/A" |
| `--context-from SLUG` | Load `{workspace}/memory/roundtables/[slug].json`, extract the synthesis field, prepend to CURRENT_CONTEXT as "PRIOR ROUNDTABLE CONTEXT: [synthesis]". If the file is not found: warn and continue without prior context. |
When filling prompt templates, apply this rule for every [PLACEHOLDER]:
| Placeholder | If missing/failed | Action |
|---|---|---|
| `[CURRENT_CONTEXT]` | Web search failed | Insert: "No real-time context available." |
| `[SELF_DIGEST]` | Agent timed out in R1 | Skip agent entirely from R2 |
| `[PEER_DIGESTS]` | All peers failed | Skip R2, go to synthesis directly |
| `[ROUND1_SUMMARIES]` | No R1 outputs | Abort with error: "0 agents responded" |
| `[ROUND2_SUMMARIES]` | Quick mode / no R2 | Insert: "No cross-critique (quick mode or single round)" |
| `[CONSENSUS_SCORES]` | No scores extracted | Insert: "N/A — scores not available" |
| `[SYNTHESIS_DRAFT]` | Synthesis failed | Skip R3, note in META |
Never leave a [PLACEHOLDER] unfilled in a prompt. Unfilled placeholders confuse models and produce garbage output.
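The fill-with-fallback rule can be sketched as below. The fallback strings mirror the table above; the function name and the dict-based interface are illustrative assumptions:

```python
# Fallbacks for placeholders whose source data may be missing
FALLBACKS = {
    "[CURRENT_CONTEXT]": "No real-time context available.",
    "[ROUND2_SUMMARIES]": "No cross-critique (quick mode or single round)",
    "[CONSENSUS_SCORES]": "N/A — scores not available",
}

def fill_template(template, values):
    """Replace every known placeholder; use a fallback when the value is
    missing or empty, so no [PLACEHOLDER] survives into the prompt."""
    for key, fallback in FALLBACKS.items():
        template = template.replace(key, values.get(key) or fallback)
    for key, val in values.items():
        template = template.replace(key, val)
    return template
```

Placeholders like `[SELF_DIGEST]` that trigger control-flow changes (skip the agent, abort) rather than text substitution are handled before template filling, per the table.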
Agents write scores in free text. Extract them with this heuristic:
- Look for a `SCORES:` block
- Lines of the form `- [Role]: X/5` → extract the integer X (1–5)
- If no parseable block, infer the score from prose and mark `[SCORE INFERRED]` in META
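A regex sketch of that extraction (the exact pattern is an assumption — agents' free-text score lines may vary):

```python
import re

# Matches lines like "- Attacker: 4/5 — strong case"
SCORE_LINE = r"-\s*([A-Za-z' ]+):\s*([1-5])\s*/\s*5"

def extract_scores(text):
    """Return {role: score}; an empty dict on a malformed block,
    so a bad score block never crashes the workflow."""
    return {role.strip(): int(val) for role, val in re.findall(SCORE_LINE, text)}
```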
Do NOT crash the workflow on a malformed score block.

Default panels per mode:
- debate: [opus-4.6, gpt-5.3-codex, gemini-3.1-pro, grok-4] → Advocate / Devil's Advocate / Analyst / Contrarian
- build: [opus-4.6, gemini-3.1-pro, grok-4, gpt-5.3-codex] → Architect / Reviewer / Engineer / Implementer
- redteam: [opus-4.6, gemini-3.1-pro, grok-4, gpt-5.3-codex] → Defender / Analyst / Attacker / Red Teamer
- vote: [opus-4.6, gemini-3.1-pro, grok-4, gpt-5.3-codex] → 4-way vote panel

(all via the `blockrun/` prefix — see panels.json for exact model IDs and fallbacks)