Council Pilot — Autonomous Pipeline
Build a fully automated, expert-driven project from a single idea. The pipeline discovers experts, distills their public knowledge, forms a council, scores maturity, builds code, debugs, and iterates until the council awards 100/100, then submits the result to GitHub.
Core Rule
Distill methods, evidence preferences, reasoning habits, critique patterns, and blind spots from PUBLIC sources only. Do NOT impersonate living persons, invent private beliefs, fabricate quotes, or treat expert profiles as primary evidence. Expert memory is an analysis lens, not truth.
Quick Start
# Full autonomous pipeline
python3 scripts/expert_distiller.py init --root ./forum --domain "AI Reliability" --topic "LLM hallucination detection"
Then invoke this skill with the domain idea. The skill handles everything from discovery to GitHub submission.
Autonomous Pipeline: 10 Phases
Phases 1-5 run once (setup and baseline score). Phases 6-9 iterate until convergence. Phase 10 runs once at completion.
INIT → DISCOVER → DISTILL → COUNCIL → SCORE
                                        │
                            score < 100 │
                                        ▼
    ┌────────────────────────────────► BUILD
    │                                    │
    │                                    ▼
    │                                  DEBUG
    │                                    │
    │                                    ▼
    │                                 RESCORE ──(score = 100 + all pass)──► SUBMIT (terminal)
    │                                    │
    │                       score < 100  │
    │                                    ▼
    │                                GAP_FILL
    │                                    │
    │                 needs new experts  │
    │                                    ▼
    │    discover single → distill single → update council
    │                                    │
    └────────────────────────────────────┘
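The loop above can be sketched as a small driver. This is illustrative only: run_phase is a hypothetical hook standing in for each phase's real logic, and the return-value conventions (DEBUG returns pass/fail, RESCORE returns the total score) are assumptions, not the pipeline's actual API.

```python
# Minimal sketch of the 10-phase loop. run_phase(name) is a
# hypothetical callback that executes one phase and returns its result.

SETUP_PHASES = ["INIT", "DISCOVER", "DISTILL", "COUNCIL", "SCORE"]

def run_pipeline(run_phase, max_iterations=10):
    """Drive setup once, then iterate BUILD→DEBUG→RESCORE→GAP_FILL
    until score = 100 with all verification passing, or the cap hits."""
    for phase in SETUP_PHASES:               # Phases 1-5: run once
        run_phase(phase)
    for iteration in range(max_iterations):  # Phases 6-9: iterate
        run_phase("BUILD")
        verification_ok = run_phase("DEBUG")   # assumed: True if all stages PASS
        score = run_phase("RESCORE")           # assumed: 0-100 total
        if score == 100 and verification_ok:
            run_phase("SUBMIT")              # Phase 10: terminal
            return "submitted", iteration + 1
        run_phase("GAP_FILL")
    return "paused", max_iterations          # max iterations reached
```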
Phase 1: INIT
Goal: Parse user idea into domain spec, initialize forum root.
Steps:
- Parse the user's idea/concept into a domain name and topic description
- Run CLI:
python3 scripts/expert_distiller.py init --root <forum_root> --domain "<domain>" --topic "<topic>"
- Initialize pipeline state:
python3 scripts/expert_distiller.py build --root <forum_root> --domain "<domain>" --target-repo "<repo>"
- Write the domain's coverage_axes — list 3-8 sub-domains the forum should cover
Output: Initialized forum root with domains/<domain_id>.json, directory layout, pipeline_state.json
Transition: → DISCOVER
Phase 2: DISCOVER
Goal: Web-search for expert candidates (3-8 people).
Steps:
- Generate search queries from the domain topic (see agents/expert-researcher.md)
- For each query, use the current environment's web search tool to search
- For each result, use the current environment's web fetch/open tool to read candidate pages
- Identify real public figures with domain expertise
- Collect source URLs classified by tier (A/B/C per references/source-gates.md)
- For each candidate, run CLI commands:
python3 scripts/expert_distiller.py candidate --root <root> --domain <domain> --name "<Name>" --reason "<why>"
python3 scripts/expert_distiller.py source --root <root> --expert-id <id> --tier A --title "<Title>" --url "<URL>" --note "<Note>"
python3 scripts/expert_distiller.py source --root <root> --expert-id <id> --tier B --title "<Title>" --url "<URL>" --note "<Note>"
Gate: At least 3 candidates with at least 1 Tier A + 1 Tier B source each
Output: candidates/<id>.json + source_dossiers/<id>.json for each candidate
Transition: → DISTILL
Phase 3: DISTILL
Goal: Audit candidates, promote, fill profiles with LLM-driven distillation.
Steps:
- For each candidate, run audit:
python3 scripts/expert_distiller.py audit --root <root> --expert-id <id>
- For candidates that pass audit (promotion_allowed: true), create profile:
python3 scripts/expert_distiller.py profile --root <root> --domain <domain> --expert-id <id> --name "<Name>"
- For each promoted expert, fill the profile by reading source content:
- Read source URLs with the current environment's web fetch/open tool
- Extract career arc, reasoning patterns, critique styles, blind spots
- Write the filled profile to experts/<id>/profile.json
- Write the distillate markdown to experts/<id>/distillate.md
- Follow the contract in references/profile-contract.md
- Rebuild index:
python3 scripts/expert_distiller.py index --root <root>
Gate: At least 2 experts with fully filled profiles
Output: experts/<id>/profile.json + experts/<id>/distillate.md for each promoted expert
Transition: → COUNCIL
Phase 4: COUNCIL
Goal: Form expert council with auto-assigned roles.
Steps:
- Create council:
python3 scripts/expert_distiller.py council create --root <root> --domain <domain> --name "<Domain> Main Council"
# Optional: --experts id1,id2,id3 to specify which experts (default: all)
- Review the auto-assigned roles (chair, reviewer, advocate, skeptic)
- If needed, manually adjust with council add-member --role <role>
Output: councils/<council_id>.json with members, roles, weights, routing rules
Transition: → SCORE (first pass)
Phase 5: SCORE (First Pass)
Goal: Initial scoring — all axes start at 0 (no artifact exists).
Steps:
- Run score command:
python3 scripts/expert_distiller.py score --root <root> --domain <domain>
- This first pass records baseline 0/100 — everything needs building
Output: scoring_reports/<domain>_<timestamp>.json with total=0
Transition: → BUILD (always needs work on first pass)
Phase 6: BUILD
Goal: Generate project code guided by expert lenses, targeting weakest axes.
Steps:
- Read the scoring report to identify weakest axes
- For each expert in the council, extract build guidance:
reasoning_kernel.core_questions — what they'd ask
reasoning_kernel.preferred_abstractions — what concepts they use
advantage_knowledge_base.anti_patterns — what to avoid
domain_relevance.best_used_for — where they add value
- Generate code that:
- Addresses the specific gaps from the scoring report
- Uses patterns experts would approve
- Avoids anti-patterns experts would flag
- Follows expert testing and quality preferences
- Write code to the target repo path
- Record build context:
python3 scripts/expert_distiller.py build --root <root> --domain <domain> --target-repo <repo_path>
Agent: Use project-builder agent for code generation
Output: Project source code at target repo path
Transition: → DEBUG
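Targeting the weakest axes can be sketched as below. This assumes the scoring report stores per-axis scores under an "axes" key with 0-25 values; the real report schema may differ, so treat the field names as placeholders.

```python
import json

def weakest_axes(report_path, max_per_axis=25):
    """Return axes that are below full marks, weakest first.
    Assumed report shape: {"axes": {"breadth": 18, "depth": 25, ...}}."""
    with open(report_path) as f:
        report = json.load(f)
    gaps = {name: max_per_axis - score
            for name, score in report["axes"].items()
            if score < max_per_axis}
    # Largest gap first, so BUILD addresses the biggest deficit.
    return sorted(gaps, key=gaps.get, reverse=True)
```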
Phase 7: DEBUG
Goal: Verification loop — build, types, lint, tests, security, diff.
Steps:
- Build: Run the project's build command. Fix failures.
- Type Check: Run type checker. Fix errors.
- Lint: Run linter. Fix warnings.
- Tests: Run test suite. Fix failures.
- Security: Scan for secrets, injection, OWASP top 10.
- Diff Review: Check for regressions and scope creep.
For each stage failure:
- Max 3 retries per failure type
- Tag failure with impacted scoring axis (see references/build-integration.md)
- If 3 retries exhausted, feed failure to GAP_FILL
Agent: Use project-builder agent for build failure fixes
Transition:
- All PASS → RESCORE
- Any FAIL (after retries) → GAP_FILL with failure details
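The retry policy above can be sketched as follows. The run_stage and fix callbacks are hypothetical hooks onto the real build/lint/test tooling; only the control flow (one initial run plus up to 3 fix-and-retry attempts, unresolved failures forwarded to GAP_FILL) comes from the text.

```python
def run_verification(stages, run_stage, fix, max_retries=3):
    """Run each verification stage; on failure, fix and retry up to
    max_retries times. Returns stages still failing after retries,
    which the pipeline feeds to GAP_FILL."""
    unresolved = []
    for stage in stages:
        for attempt in range(max_retries + 1):  # initial run + retries
            if run_stage(stage):
                break                            # stage PASS
            if attempt < max_retries:
                fix(stage)                       # attempt an automated fix
        else:
            unresolved.append(stage)             # retries exhausted
    return unresolved
```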
Phase 8: RESCORE
Goal: Full 4-axis scoring with council debate protocol.
Steps:
- Run score command against the artifact:
python3 scripts/expert_distiller.py score --root <root> --domain <domain> --artifact <repo_path>
- For each axis, apply expert council debate (see references/council-protocol.md):
- Each expert scores independently using their reasoning kernel
- Skeptic challenges high scores (>20)
- Advocate affirms low scores (<15)
- Compute weighted median per axis
- Sum axes for total (0-100)
- Update pipeline state with new scores
- Generate report:
python3 scripts/expert_distiller.py report --root <root> --domain <domain> --format markdown
Agent: Use maturity-scorer agent for adversarial scoring
Output: Updated scoring_reports/<domain>_<timestamp>.json
Transition:
- total = 100 + verification all PASS → SUBMIT
- total < 100 → GAP_FILL
- Score regression (>10 point drop) → PAUSE and flag
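The aggregation step in RESCORE can be sketched as below. One common convention for a weighted median is used here (the smallest value at which cumulative weight reaches half the total); the council protocol may define it differently, so treat this as an assumption.

```python
def weighted_median(scores):
    """scores: list of (value, weight) pairs from individual experts.
    Returns the smallest value where cumulative weight reaches half
    the total weight (one standard weighted-median convention)."""
    total = sum(w for _, w in scores)
    cumulative = 0.0
    for value, weight in sorted(scores):
        cumulative += weight
        if cumulative >= total / 2:
            return value
    return scores[-1][0]

def aggregate(axis_scores):
    """axis_scores: {axis: [(expert_score_0_to_25, weight), ...]}.
    Per-axis weighted medians sum to the 0-100 total."""
    per_axis = {axis: weighted_median(s) for axis, s in axis_scores.items()}
    return per_axis, sum(per_axis.values())
```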
Phase 9: GAP_FILL
Goal: Analyze gaps, add experts if needed, determine build focus.
Steps:
- Run coverage analysis:
python3 scripts/expert_distiller.py coverage --root <root> --domain <domain>
- Analyze scoring report for specific gaps per axis
- Determine action: focus the next BUILD on the weakest axes, or, if a sub-domain is uncovered, fast-track new experts (discover single → distill single → update council)
- Update pipeline state history
Agent: Use gap-analyst agent for coverage analysis
Output: gap_analyses/<domain>_<timestamp>.json with recommendations
Transition: → BUILD (next iteration)
Phase 10: SUBMIT
Goal: Submit converged artifact to GitHub.
Steps:
- Run final verification (all 6 stages must PASS)
- Generate final report:
python3 scripts/expert_distiller.py report --root <root> --domain <domain> --format markdown --output MATURITY_REPORT.md
- Create git branch: council-pilot/<domain_id>
- Commit all changes with format:
feat(council-pilot): <domain> maturity 100/100
Breadth: 25/25 | Depth: 25/25 | Thickness: 25/25 | Effectiveness: 25/25
Expert council: <council_name> (<expert_count> experts)
Iterations: <iteration_count>
- Push branch and create PR:
git push -u origin council-pilot/<domain_id>
gh pr create --title "Expert-Distilled: <domain>" --body-file MATURITY_REPORT.md
- Update pipeline state: status: submitted
Output: GitHub PR URL
Transition: Terminal (pipeline complete)
Convergence Criteria
The pipeline terminates ONLY when ALL conditions are met:
- Maturity score = 100 (breadth=25, depth=25, thickness=25, effectiveness=25)
- Verification loop: all 6 stages PASS
- No coverage gaps flagged by gap analyst
- Council consensus that artifact is submission-ready
A score of 100 means the expert council cannot find meaningful improvements. This is intentionally hard to achieve.
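The termination test can be expressed as a single predicate. Field and parameter names here are illustrative; the axis names and the four conditions come from the list above.

```python
AXES = ("breadth", "depth", "thickness", "effectiveness")

def is_converged(axes, verification_stages, gaps_flagged, council_ready):
    """True only when ALL convergence conditions hold:
    every axis at 25, all 6 verification stages PASS,
    no coverage gaps, and council consensus."""
    perfect_score = all(axes.get(a) == 25 for a in AXES)
    all_pass = all(verification_stages.values())
    return perfect_score and all_pass and not gaps_flagged and council_ready
```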
Loop Parameters
| Parameter | Default | Description |
|---|---|---|
| --max-iterations | 10 | Maximum BUILD→DEBUG→RESCORE cycles |
| --target-repo | current dir | Where to build the project |
| --quick | false | Reduce to 2 experts, max 3 iterations |
State Persistence
Pipeline state is stored in <root>/pipeline_state.json:
- Current phase, iteration count, score history
- Target repo, GitHub branch, active council
- Experts added mid-loop (flagged for later review)
- Build failures and score regressions
Each iteration reads state at start, writes at end. Context can be safely compacted between iterations.
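The read-at-start / write-at-end discipline can be sketched as below. The default field names are hypothetical; the real schema lives in pipeline_state.json itself. The write goes through a temp file so a crash mid-write never leaves a truncated state file.

```python
import json
import os

def load_state(root):
    """Read pipeline_state.json at iteration start.
    Field names here are illustrative defaults only."""
    path = os.path.join(root, "pipeline_state.json")
    if not os.path.exists(path):
        return {"phase": "INIT", "iteration": 0, "score_history": []}
    with open(path) as f:
        return json.load(f)

def save_state(root, state):
    """Write state at iteration end via a temp file + atomic rename,
    so a compacted context can always resume from a valid file."""
    path = os.path.join(root, "pipeline_state.json")
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f, indent=2)
    os.replace(tmp, path)  # atomic on POSIX and Windows
```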
Dynamic Expert Addition
The pipeline can add new experts mid-loop:
- Gap analyst identifies uncovered sub-domain
- Expert researcher discovers 1-2 targeted candidates (fast-track)
- Minimum viable sources collected (1 Tier A + 1 Tier B)
- Abbreviated audit → skeleton profile → add to council
- Fast-tracked experts start with weight cap 0.2 (vs 0.3)
- After 2 scoring cycles, fast-track flag is removed
Maximum 2 new experts per iteration. Total council size must not exceed 10.
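The fast-track rules above can be sketched as follows. The council dict shape ("members" with id/weight fields) is an assumption for illustration; the limits (weight cap 0.2, max 2 per iteration, council size 10, flag removal after 2 cycles) come from the text.

```python
def admit_fast_track(council, candidate_ids, iteration_added,
                     max_new=2, max_size=10):
    """Add up to max_new fast-tracked experts, capping weight at 0.2
    and never growing the council past max_size members."""
    admitted = []
    for expert_id in candidate_ids[:max_new]:
        if len(council["members"]) >= max_size:
            break  # council full
        council["members"].append({
            "id": expert_id,
            "weight": 0.2,              # fast-track cap (vs 0.3 normal)
            "fast_track": True,
            "added_iteration": iteration_added,
        })
        admitted.append(expert_id)
    return admitted

def clear_fast_track(council, current_cycle):
    """Remove the fast-track flag after 2 scoring cycles."""
    for member in council["members"]:
        if member.get("fast_track") and \
                current_cycle - member["added_iteration"] >= 2:
            member["fast_track"] = False
```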
Failure Recovery
| Failure | Recovery |
|---|---|
| Max iterations reached | Pause, generate report, print current state |
| Build failure after 3 retries | Log failure, continue to GAP_FILL |
| Score regression (>10 points) | Pause, revert to previous artifact |
| Context window pressure | Write state to disk, compact, resume |
Search Tools
Use whichever web research surface is available in the active agent runtime:
- In Codex, use the built-in web search/open workflow when current public sources are needed.
- In Claude Code, use configured web-search MCP tools if they are installed.
- If no web tool is available, run discover --from-file with a curated JSON source list and mark the run as source-file assisted
Safety and Trust
- Require at least one Tier A and one Tier B source before promotion
- Never use Tier C sources to define core beliefs, bio_arc, signature_ideas, critique_style, or quote_bank
- Mark stale or weakly sourced fields as tentative
- Preserve source refs and freshness metadata with every profile
- Downgrade conclusions that rely only on expert memory
- Never fabricate quotes — all quotes must be verbatim or clearly marked as paraphrases with source attribution
- Expert memory is an analysis lens, not primary evidence
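The tier gates above can be made executable. The source-record shape (a dict with a "tier" key) is an assumption for illustration; the rules themselves (1 Tier A + 1 Tier B for promotion, Tier C never defines core fields) come from the text.

```python
# Core profile fields that Tier C sources may never define.
CORE_FIELDS = {"core_beliefs", "bio_arc", "signature_ideas",
               "critique_style", "quote_bank"}

def promotion_allowed(sources):
    """Promotion gate: at least one Tier A AND one Tier B source.
    sources: list of dicts like {"tier": "A", "url": ...} (assumed shape)."""
    tiers = {s["tier"] for s in sources}
    return "A" in tiers and "B" in tiers

def field_source_ok(field, source_tier):
    """Tier C sources may support peripheral fields only."""
    return not (field in CORE_FIELDS and source_tier == "C")
```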
Fast Commands (Manual Mode)
All CLI commands work standalone without the autonomous pipeline:
# Initialize
python3 scripts/expert_distiller.py init --root ./forum --domain "My Domain" --topic "Description"
# Add candidate and sources
python3 scripts/expert_distiller.py candidate --root ./forum --domain "my-domain" --name "Expert Name" --reason "Why"
python3 scripts/expert_distiller.py source --root ./forum --expert-id expert-name --tier A --title "Source" --url "https://..." --note "Note"
# Audit, profile, validate
python3 scripts/expert_distiller.py audit --root ./forum --expert-id expert-name
python3 scripts/expert_distiller.py profile --root ./forum --domain "my-domain" --expert-id expert-name --name "Expert Name"
python3 scripts/expert_distiller.py validate --root ./forum --strict
# Council management
python3 scripts/expert_distiller.py council create --root ./forum --domain "my-domain"
python3 scripts/expert_distiller.py council list --root ./forum
python3 scripts/expert_distiller.py council show --root ./forum --council-id my-domain-main
# Scoring and analysis
python3 scripts/expert_distiller.py score --root ./forum --domain "my-domain" --artifact ./project
python3 scripts/expert_distiller.py coverage --root ./forum --domain "my-domain"
python3 scripts/expert_distiller.py report --root ./forum --domain "my-domain" --format markdown
# Discovery and maintenance
python3 scripts/expert_distiller.py discover --root ./forum --domain "my-domain" --from-file candidates.json
python3 scripts/expert_distiller.py refresh --root ./forum --stale-only