Install
Multi-agent tool for structured, multi-round discussions with dialectic analysis, random speaking order, shared state, and integration of real sub-agents for efficient collaboration and alignment.

```shell
openclaw skills install team-discuss
```
```python
from core import SharedStore, DiscussionOrchestrator
from models import Discussion, DiscussionConfig, Participant, AgentRole

# Initialize
store = SharedStore(base_dir="./discussions")
orchestrator = DiscussionOrchestrator(store)

# Create discussion
discussion = Discussion(
    id="my-discussion-001",
    topic="Which storage layer should we use?",
    description="SQLite vs PostgreSQL technology selection",
    max_rounds=3,
    config=DiscussionConfig(consensus_threshold=0.75),
    participants=[
        Participant(agent_id="architect", role_id=AgentRole.ARCHITECT),
        Participant(agent_id="backend", role_id=AgentRole.DEVOPS),
    ],
)

store.create_discussion(discussion)
```
```python
from models import MessageType  # message-type enum used in the return value

async def agent_callback(discussion_id, round_num, previous_messages):
    # Build prompt
    prompt = build_prompt(round_num, previous_messages)

    # Call real agent
    response = await sessions_spawn(
        runtime="subagent",
        agentId="architect",
        mode="run",
        task=prompt,
    )
    return response, MessageType.PROPOSAL

callbacks = {
    "architect": agent_callback,
    "backend": agent_callback,
}
```
```python
# Run discussion
result = await orchestrator.run_discussion(discussion.id, callbacks)

# View results
print(f"Status: {result.status}")
print(f"Rounds: {result.current_round}")
print(f"Consensus: {result.consensus_level}")
```
```python
from core import DialecticEngine

dialectic = DialecticEngine()
analysis = dialectic.analyze_message(message, previous_messages)

print(f"Quality: {analysis.quality}")        # strong/moderate/weak/fallacious
print(f"Score: {analysis.score}")
print(f"Citation: {analysis.has_citation}")
print(f"Fallacies: {analysis.fallacies}")
```
Detected fallacy types:

- `ad_hominem` - Personal attack
- `straw_man` - Straw man fallacy
- `false_dichotomy` - False dilemma
- `hasty_generalization` - Hasty generalization
- `appeal_to_authority` - Appeal to authority
- `slippery_slope` - Slippery slope

```python
# First round: random shuffle; subsequent rounds rotate
order = coordinator.determine_speaking_order(
    participants,
    SpeakingOrder.ROUND_ROBIN
)
```
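The "shuffle once, then rotate" rule can be sketched in plain Python. This is an illustrative standalone function, not the coordinator's actual implementation (which takes a `SpeakingOrder` mode rather than a round number):

```python
import random

def speaking_order(participants, round_num, seed=None):
    """Round 1: random shuffle; later rounds: rotate the round-1 order.

    Illustrative sketch only -- the real coordinator's logic may differ.
    """
    rng = random.Random(seed)
    base = list(participants)
    rng.shuffle(base)                 # same seed => same round-1 order
    k = (round_num - 1) % len(base)   # rotate by one position per round
    return base[k:] + base[:k]

agents = ["architect", "backend", "tester"]
print(speaking_order(agents, 1, seed=42))
print(speaking_order(agents, 2, seed=42))  # round-1 order rotated by one
```

Rotation guarantees no participant permanently speaks first, while the initial shuffle removes ordering bias.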
From round 2 onward, agents must cite the opponent's original words:

```
I disagree with @architect's view:
> "Choosing PostgreSQL is not premature optimization"
This statement is misleading...
```
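A citation check like the one behind `analysis.has_citation` could be as simple as requiring both an `@mention` and a blockquote line. A hypothetical sketch, not the engine's real rule:

```python
import re

def has_citation(content: str) -> bool:
    """True if the message names an opponent (@mention) and quotes
    them (a '>' blockquote line). Illustrative heuristic only."""
    mentioned = re.search(r"@\w+", content) is not None
    quoted = any(line.lstrip().startswith(">") for line in content.splitlines())
    return mentioned and quoted

msg = ('I disagree with @architect\'s view:\n'
       '> "Choosing PostgreSQL is not premature optimization"\n'
       'This statement is misleading...')
print(has_citation(msg))                 # True
print(has_citation("I just disagree."))  # False
```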
Assign an agent to play devil's advocate. For example, assign the tester as Devil's Advocate: even if they agree internally, they must defend the minority position.
```
team-discuss/
├── src/
│   ├── core/
│   │   ├── shared_store.py   # Shared state storage
│   │   ├── orchestrator.py   # Multi-round orchestrator
│   │   ├── dialectic.py      # Dialectical logic engine
│   │   └── coordinator.py    # Coordinator logic
│   ├── agents/
│   │   └── bridge.py         # Agent bridge
│   └── models.py             # Data models
├── examples/
│   └── run_real_discussion.py  # Real discussion example
└── tests/
    └── test_integration.py     # Integration tests
```
```python
DiscussionConfig(
    max_rounds=5,                    # Maximum rounds
    min_rounds_before_consensus=2,   # Minimum rounds before consensus
    consensus_threshold=0.75,        # Consensus threshold (75% agreement)
    token_budget=50000,              # Token budget
)
```
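These fields imply a stopping rule: consensus is declared only once enough rounds have elapsed and the agreement fraction meets the threshold. A sketch of that rule, assuming consensus level is a simple agree/total ratio (the orchestrator's actual computation of `consensus_level` is not shown here):

```python
def consensus_reached(agree: int, total: int, round_num: int,
                      consensus_threshold: float = 0.75,
                      min_rounds_before_consensus: int = 2) -> bool:
    """Sketch of the stopping rule implied by DiscussionConfig."""
    if round_num < min_rounds_before_consensus:
        return False  # never converge before the minimum round count
    return total > 0 and agree / total >= consensus_threshold

print(consensus_reached(3, 4, round_num=1))  # False: below min rounds
print(consensus_reached(3, 4, round_num=2))  # True: 75% meets threshold
print(consensus_reached(2, 4, round_num=3))  # False: 50% below threshold
```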
```python
SpeakingOrder.FREE         # Free speaking (random)
SpeakingOrder.ROUND_ROBIN  # Round robin (recommended)
SpeakingOrder.ROLE_BASED   # Role-based priority
```
```shell
# Run example
cd /root/.openclaw/workspace/data/projects/team-discuss
python3 examples/run_real_discussion.py
```
```python
# Create a philosophical discussion
discussion = Discussion(
    id="philosophy-debate-001",
    topic="Does free will exist, or is everything determined?",
    description="Philosophical debate on free will vs determinism",
    max_rounds=3,
    participants=[
        Participant(agent_id="philosopher1", role_id=AgentRole.REVIEWER),
        Participant(agent_id="scientist", role_id=AgentRole.ARCHITECT),
        Participant(agent_id="skeptic", role_id=AgentRole.TESTER),
    ],
)
```
Philosophical debates benefit from:
Sample output:
```
🔄 Round 1 started

💬 @architect (Architect):
   I support using PostgreSQL...
   📊 Quality: moderate (70.0 points)

💬 @backend (Backend Dev):
   I support using SQLite...
   📊 Quality: moderate (60.0 points)

✅ Round 1 ended
🔄 Round 2 started

💬 @architect:
   Responding to @backend:
   > "Premature optimization is the root of all evil"
   This statement confuses...
   📊 Quality: strong (85.0 points)
   📌 Citation: ✓

✅ Round 2 ended

✓ Discussion completed!
Final status: max_rounds_reached
Consensus level: partial
```
Tune `min_rounds_before_consensus` to prevent premature convergence.

Final statuses:

- `CONSENSUS_REACHED` - Consensus reached, can execute directly
- `MAX_ROUNDS_REACHED` - Requires human judgment
- `COMPLETED` - Discussion ended naturally

```python
orchestrator = DiscussionOrchestrator(
    store=store,
    response_timeout=180  # Increase timeout
)
```
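A per-callback timeout like `response_timeout` is typically implemented with `asyncio.wait_for`. An illustrative wrapper (the orchestrator's internals may differ; treating a timeout as a skipped turn is an assumption):

```python
import asyncio

async def call_with_timeout(callback, *args, response_timeout=180):
    """Bound a single agent callback to response_timeout seconds."""
    try:
        return await asyncio.wait_for(callback(*args), timeout=response_timeout)
    except asyncio.TimeoutError:
        return None  # assumption: a timed-out agent simply skips its turn

async def slow_agent(*_):
    await asyncio.sleep(0.2)
    return "late answer"

print(asyncio.run(call_with_timeout(slow_agent, response_timeout=0.05)))  # None
print(asyncio.run(call_with_timeout(slow_agent, response_timeout=1.0)))   # late answer
```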
Shared storage uses optimistic locking and automatically retries on conflict.

```python
store = SharedStore(base_dir="/path/to/discussions")
```
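Optimistic locking usually means each write carries the version it was read at, and a mismatch triggers a re-read and retry. A minimal sketch with hypothetical names (`SharedStore`'s real schema is not shown in this README):

```python
import copy

class ConflictError(Exception):
    """Raised when a write's expected version is stale."""

class VersionedStore:
    """Hypothetical in-memory store with a version counter."""
    def __init__(self):
        self._doc, self._version = {}, 0

    def read(self):
        return copy.deepcopy(self._doc), self._version

    def write(self, doc, expected_version):
        if expected_version != self._version:
            raise ConflictError("stale version")
        self._doc, self._version = doc, self._version + 1

def update_with_retry(store, mutate, max_retries=3):
    """Read-modify-write loop: re-read and retry on version conflict."""
    for _ in range(max_retries):
        doc, version = store.read()
        mutate(doc)
        try:
            store.write(doc, version)
            return True
        except ConflictError:
            continue
    return False

store = VersionedStore()
print(update_with_retry(store, lambda d: d.update(round=1)))  # True
```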
```python
from core import DialecticEngine

class MyAgentBridge:
    async def generate_response(self, ...):
        # Custom calling logic
        pass

class MyDialecticEngine(DialecticEngine):
    def _detect_fallacies(self, content):
        # Add custom fallacy detection
        pass
```
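A custom `_detect_fallacies` might start as a keyword matcher returning fallacy ids. This naive sketch is purely illustrative (patterns and return shape are assumptions; a serious detector would need real NLP):

```python
# Hypothetical cue phrases per fallacy id -- illustration only.
FALLACY_PATTERNS = {
    "ad_hominem": ["you people always", "typical of someone like you"],
    "false_dichotomy": ["either we", "the only alternative is"],
    "slippery_slope": ["will inevitably lead to"],
}

def detect_fallacies(content: str) -> list[str]:
    """Return the ids of fallacies whose cue phrases appear in content."""
    lowered = content.lower()
    return [name for name, cues in FALLACY_PATTERNS.items()
            if any(cue in lowered for cue in cues)]

print(detect_fallacies("Either we use PostgreSQL or the project fails."))
# ['false_dichotomy']
```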
Project location: `/root/.openclaw/workspace/data/projects/team-discuss`
Example: `examples/run_real_discussion.py`
Tests: `tests/test_integration.py`

| Feature | Status | Description |
|---|---|---|
| Devil's Advocate | 🚧 In Development | Auto-assign minority role, ensure opposition voices heard |
| Stance Change Rewards | 🚧 In Development | Reward agents for rationally changing position |
| CLI Interface | 📋 Planned | Command-line tool for creating/viewing/managing discussions |
| REST API | 📋 Planned | HTTP API for remote calls |
| Web UI | 📋 Planned | Visual discussion dashboard |
📦 Published to clawhub.com
Team-Discuss is a multi-agent collaborative discussion tool supporting multi-round iteration, dialectic analysis, random speaking order, and other features, helping teams align on proposals efficiently.
```shell
cd /root/.openclaw/workspace/data/skills/team-discuss
python3 example.py
```
/root/.openclaw/workspace/data/projects/team-discuss/