Install
Install via the OpenClaw skills registry:

```shell
openclaw skills install quorum
```

Quorum is a multi-agent validation framework: it validates AI agent outputs by spawning up to 6 independent AI critics that evaluate artifacts against rubrics. Every criticism must cite evidence, and you get a structured, evidence-grounded verdict.
Clone the repository and install:
```shell
git clone https://github.com/SharedIntellect/quorum.git
cd quorum/reference-implementation
pip install -r requirements.txt
```
Run a quorum check on any file:
```shell
python -m quorum.cli run --target <path-to-artifact> --rubric <rubric-name>
```
Rubrics:

- `research-synthesis`: Research reports, literature reviews, technical analyses
- `agent-config`: Agent configurations, YAML specs, system prompts
- `python-code`: Python source files (25 criteria, PC-001–PC-025; auto-detected on `.py` files)

Depth profiles:

- `quick`: 2 critics (correctness, completeness) + pre-screen, ~5-10 min
- `standard`: 4 active critics (correctness, completeness, security + tester) + pre-screen, ~15-30 min (default)
- `thorough`: 5 active critics (+ code_hygiene) + pre-screen + fix loops, ~30-60 min

† Cross-consistency checking requires the `--relationships` flag with a relationships manifest.
All depth profiles include the deterministic pre-screen (10 checks: credentials, PII, syntax errors, broken links, TODOs, and more) before any LLM critic runs.
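Because the pre-screen is deterministic, it amounts to pattern matching over the artifact before any model is invoked. A minimal sketch of that idea, assuming hypothetical check names and regexes (this is not Quorum's actual implementation):

```python
import re

# Illustrative patterns only; Quorum's real pre-screen runs 10 checks.
PRESCREEN_PATTERNS = {
    "credentials": re.compile(r"(sk-[A-Za-z0-9-]{10,}|AKIA[0-9A-Z]{16})"),
    "todo_markers": re.compile(r"\b(TODO|FIXME|XXX)\b"),
    "broken_link": re.compile(r"\]\(\s*\)"),  # empty markdown link target
}

def prescreen(text):
    """Return a list of {check, line} findings for deterministic issues."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for check, pattern in PRESCREEN_PATTERNS.items():
            if pattern.search(line):
                findings.append({"check": check, "line": lineno})
    return findings
```

A deterministic stage like this costs nothing per token, so it makes sense to run it before any LLM critic is spawned.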
```shell
# Validate a research report
quorum run --target my-report.md --rubric research-synthesis

# Quick check (faster, fewer critics)
quorum run --target my-report.md --rubric research-synthesis --depth quick

# Batch: validate all markdown files in a directory
quorum run --target ./docs/ --pattern "*.md" --rubric research-synthesis

# Cross-artifact consistency check
quorum run --target ./src/ --relationships quorum-relationships.yaml --depth standard

# Use a custom rubric
quorum run --target my-spec.md --rubric ./my-rubric.json

# List available rubrics
quorum rubrics list

# Initialize config interactively
quorum config init
```
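Custom rubrics are JSON files. The actual schema is defined by Quorum; the fragment below is a hypothetical illustration of the general shape such a rubric might take (every field name here is an assumption, not the documented format):

```json
{
  "id": "my-rubric",
  "description": "House style for design specs",
  "criteria": [
    {"id": "MR-001", "description": "Every requirement has an acceptance test", "severity": "HIGH"},
    {"id": "MR-002", "description": "No unresolved open questions", "severity": "MEDIUM"}
  ]
}
```

Check the repository's bundled rubrics for the authoritative schema before writing your own.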
On first run, Quorum prompts for your preferred models and writes `quorum-config.yaml`. You can also create it manually:
```yaml
models:
  tier_1: anthropic/claude-sonnet-4-6  # Judgment roles
  tier_2: anthropic/claude-sonnet-4-6  # Evaluation roles
depth: standard
```
Set your API key:
```shell
export ANTHROPIC_API_KEY=sk-ant-...
# or
export OPENAI_API_KEY=sk-...
```
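A wrapper script can verify that one of these keys is present before launching a run. A minimal sketch, assuming Anthropic takes precedence when both are set (that ordering is an assumption, not documented behavior):

```python
import os

def configured_provider(env=None):
    """Return 'anthropic' or 'openai' depending on which API key is set, or None."""
    env = os.environ if env is None else env
    if env.get("ANTHROPIC_API_KEY"):
        return "anthropic"
    if env.get("OPENAI_API_KEY"):
        return "openai"
    return None
```

Failing fast on a missing key is cheaper than discovering it mid-run after the pre-screen has already passed.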
Quorum produces a structured verdict:
Exit codes: 0 = PASS/PASS_WITH_NOTES, 1 = error, 2 = REVISE/REJECT.
Each finding includes a severity (CRITICAL/HIGH/MEDIUM/LOW), evidence citations pointing to specific locations in the artifact, and remediation suggestions. The run directory contains `prescreen.json`, per-critic finding JSONs, `verdict.json`, and a human-readable `report.md`.
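The exit codes make Quorum straightforward to wire into CI. A sketch of a gate helper that maps exit codes to decisions, using the mapping stated above (the subprocess invocation in the comment assumes `quorum` is on your PATH):

```python
def interpret_exit_code(code):
    """Map a quorum exit code to a gate decision (mapping from the docs above)."""
    if code == 0:
        return "pass"    # PASS / PASS_WITH_NOTES
    if code == 2:
        return "revise"  # REVISE / REJECT
    return "error"       # 1 (or anything unexpected): tool failure

# Example CI usage (requires quorum installed):
# import subprocess
# proc = subprocess.run(["quorum", "run", "--target", "my-report.md",
#                        "--rubric", "research-synthesis"])
# if interpret_exit_code(proc.returncode) != "pass":
#     raise SystemExit("quorum gate failed")
```

Treating exit code 1 separately from 2 matters in CI: a REVISE verdict is actionable feedback, while an error means the run itself should be retried or investigated.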
⚖️ LICENSE — Not part of the operational specification above. This file is part of Quorum. Copyright 2026 SharedIntellect. MIT License. See LICENSE for full terms.