Debate Learning Workflow

v1.0.1

Run evidence-backed multi-agent debates (A/B/Opponent3/Judge) over 20–40 rounds, with loophole analysis and extraction of universal, actionable lessons.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for kdylan1010-alt/debate-learning-workflow.

Prompt preview: Install & Setup
Install the skill "Debate Learning Workflow" (kdylan1010-alt/debate-learning-workflow) from ClawHub.
Skill page: https://clawhub.ai/kdylan1010-alt/debate-learning-workflow
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install debate-learning-workflow

ClawHub CLI


npx clawhub@latest install debate-learning-workflow
Security Scan

VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description (structured multi-agent debates, lesson extraction) align with the SKILL.md rules and outputs. No unexpected credentials, binaries, or external services are requested.
Instruction Scope
Instructions are narrowly focused on running debate rounds, logging loopholes, judge decisions, and producing lesson artifacts. They explicitly write output files to ~/Desktop (topic, daily index, lessons). This is coherent with the skill's purpose but means the agent will create/modify files in the user's home directory; the SKILL.md does not instruct reading unrelated system files or environment variables.
Install Mechanism
No install spec and no code files are present (instruction-only). This minimizes risk because nothing is downloaded or installed on disk by the skill itself.
Credentials
No environment variables, credentials, or config paths are required. The declared requirements are proportional to the described functionality.
Persistence & Privilege
The `always` flag is false, and there is no request to modify other skills or system-wide agent settings. The skill will write its own output files but requests no elevated or persistent platform privileges.
Assessment
This skill is coherent with its description: it runs structured debates and writes files to ~/Desktop (topic files, daily index, lessons). Before installing or invoking it, consider:

  1. Local file writes — review or redirect the output paths if you don't want files on your Desktop.
  2. Potential volume — 20–40 rounds per topic can produce large notebooks of content.
  3. Evidence sourcing — openings require Claim + Evidence + Source, so an agent using web searches could fetch or quote external sources (the SKILL.md does not explicitly require network access, but evidence collection may cause the agent to access the web).
  4. Review outputs for sensitive content before sharing.

No credentials or installs are requested, and nothing in the instructions appears to attempt data exfiltration or access to unrelated system configuration.

Like a lobster shell, security has layers — review code before you run it.

Tags: debate · latest · workflow
160 downloads · 0 stars · 2 versions
Updated 1mo ago
v1.0.1
MIT-0

Debate Learning Workflow

Use this skill to run structured, high-rigor debates that produce transferable learning.

Core Rules

  • Minimum 20 rounds per topic.
  • Judge-gated continuation if critical loopholes remain.
  • Hard stop at 40 rounds.
  • Openings must include Claim + Evidence + Source.
  • Unsupported openings are invalid and must be revised.
  • Every round logs:
    • loophole found
    • why loophole exists
    • concrete fix for next round
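The round-gating rules above can be sketched in Python. This is an illustrative reading of the rules, not code shipped with the skill; the function and constant names are assumptions.

```python
# Round-gating rules: 20-round minimum, judge-gated middle, 40-round hard stop.
MIN_ROUNDS = 20
MAX_ROUNDS = 40

def should_continue(round_number: int, critical_loopholes_remain: bool) -> bool:
    """Return True if the debate should run another round."""
    if round_number < MIN_ROUNDS:
        return True                       # always reach the 20-round minimum
    if round_number >= MAX_ROUNDS:
        return False                      # hard stop at 40 rounds
    return critical_loopholes_remain      # judge-gated continuation in between
```

Between rounds 20 and 40, continuation depends entirely on the Judge's verdict that critical loopholes remain unresolved.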

Roles

  • Debater A
  • Debater B
  • Opponent3 (alternative/challenger model)
  • Judge (evidence quality + unresolved uncertainty)

Output Files

  • Topic file: ~/Desktop/debate/YYYY-MM-DD-topic.md
  • Daily index: ~/Desktop/debate/YYYY-MM-DD-index.md
  • Lessons append target: ~/Desktop/lessons.md
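A small helper makes the path scheme above concrete. The helper name and dictionary keys are hypothetical; only the paths themselves come from the skill.

```python
from datetime import date
from pathlib import Path

def debate_paths(topic_slug: str, today: date) -> dict:
    """Build the three output paths named by the skill."""
    base = Path.home() / "Desktop"
    stamp = today.isoformat()  # YYYY-MM-DD
    return {
        "topic":   base / "debate" / f"{stamp}-{topic_slug}.md",
        "index":   base / "debate" / f"{stamp}-index.md",
        "lessons": base / "lessons.md",  # append target, not date-stamped
    }
```

Note that the topic file and daily index share a date prefix, while lessons.md accumulates across days.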

Universalization Quality Gate

Every generalized lesson must include:

  1. Trigger condition
  2. Loophole/failure pattern
  3. Root cause
  4. Corrective action
  5. Measurable metric/threshold
  6. Boundary conditions
  7. Transfer examples in at least 2 other fields

If any field is missing, the lesson is INVALID and must be rewritten.
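The quality gate is a simple completeness check, sketched below. The field names are assumed identifiers for the seven items above; the two-field transfer minimum comes from item 7.

```python
# The seven required fields of a generalized lesson (names are illustrative).
REQUIRED_FIELDS = (
    "trigger_condition", "loophole_pattern", "root_cause",
    "corrective_action", "metric_threshold", "boundary_conditions",
    "transfer_examples",
)

def lesson_is_valid(lesson: dict) -> bool:
    """INVALID if any field is missing/empty, or if the lesson
    transfers to fewer than 2 other fields."""
    if any(not lesson.get(field) for field in REQUIRED_FIELDS):
        return False
    return len(lesson["transfer_examples"]) >= 2
```

A lesson failing this gate is rewritten rather than appended to lessons.md.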
