Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Council v2

v2.0.3

Multi-model council review that spawns 3-5 independent AI reviewers and applies mechanical synthesis — votes decide, not orchestrator opinion. Use when you n...

by Don Zurbrick (@zurbrick)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for zurbrick/council-v2.

Prompt Preview: Install & Setup
Install the skill "Council v2" (zurbrick/council-v2) from ClawHub.
Skill page: https://clawhub.ai/zurbrick/council-v2
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install council-v2

ClawHub CLI


npx clawhub@latest install council-v2
Security Scan
VirusTotal
Benign
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
Name, description, README, role prompts, orchestration script, and synthesizer all align: the skill builds reviewer prompts, collects reviewer JSON, and mechanically synthesizes a vote-driven verdict. It does not request unrelated credentials or system access in its manifests.
Instruction Scope
The runtime instructions and scripts will read file contents/stdin and embed the full content in an orchestration prompt (council.sh). That content is intended to be dispatched to external model providers (sessions_spawn) — so any secrets in reviewed files will be sent to models. More importantly, references/synthesis-rules.md gives examples (e.g., 2 approves + 1 reject -> approve) that contradict the implementation in scripts/synthesize.py (which requires ratio > 0.75 to produce an 'approve'). This behavioral mismatch means the tool may produce different outcomes than its documentation promises.
Install Mechanism
No install spec is provided (instruction-only skill with bundled scripts). The included scripts are small, local, and text-based; there is no remote download or archive extraction. README suggests cloning a GitHub repo but the skill package itself already contains code — no automated network installs are required by the provided files.
Credentials
The skill declares no required env vars or credentials. The README recommends using OpenRouter or direct provider config (OPENROUTER_API_KEY example), but the scripts rely on the host/platform's model provider configuration (sessions_spawn) rather than managing keys themselves. This is proportionate, but you must ensure your OpenClaw/host model provider config is correct and that you understand which provider/API keys will be used.
Persistence & Privilege
No 'always: true' or other privileged persistence requested. The skill does not modify other skills or global agent config; scripts operate on local files and stdout. Autonomous invocation is allowed by default (normal for skills) but is not combined with other high-risk flags here.
What to consider before installing
This skill is plausibly what it says, but review these points before installing or using it on real secrets:

  • Behavior vs docs: The documented synthesis examples (e.g., approve+approve+reject -> approve) conflict with the code in scripts/synthesize.py, which requires an average >0.75 to return 'approve'. Test the synthesizer on representative reviewer JSON to confirm the actual behavior and, if needed, update either the docs or code.
  • Sensitive data: council.sh constructs an orchestration prompt containing the full content under review and prints it to stdout (or JSON). That content will be forwarded to whatever model providers your OpenClaw installation uses. Do not run reviews on files with secrets, credentials, or private data unless you have explicitly configured safe provider handling and logging controls.
  • Provider configuration & provenance: The skill itself does not hold API keys; it expects the agent/platform to supply model providers (OpenRouter or direct providers). Make sure your OpenClaw model/provider configuration enforces provider diversity if you want the intended cross-provider council. Also verify the origin of this package (README references a GitHub repo) if provenance matters for your environment.
  • Sanity checks: Run the synthesizer locally with mock reviewer JSON to validate exit codes and outputs. Review the role prompts to ensure they match your threat model (e.g., ensure 'Security & Risk' actually flags the issues you care about). If you rely on mechanical blocking for security decisions, explicitly test the 'critical' finding flow.

If you want, I can produce a short test plan (example reviewer JSON inputs and expected outputs) to validate the synthesizer behavior and expose the docs/code mismatch.
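The docs/code mismatch flagged above can be demonstrated in a few lines. This is a sketch, not the skill's actual code: the function names and vote representation are illustrative, and only the two decision rules (simple majority in the docs, approve ratio above 0.75 in scripts/synthesize.py, per the scan) come from the source.

```python
# Contrast the documented synthesis example with the ratio rule the scan
# attributes to scripts/synthesize.py. All names here are illustrative.

def documented_rule(verdicts):
    """Docs example: a simple majority of 'approve' wins
    (2 approves + 1 reject -> approve)."""
    approvals = verdicts.count("approve")
    return "approve" if approvals > len(verdicts) / 2 else "reject"

def implemented_rule(verdicts):
    """Scan's reading of synthesize.py: approve only if the
    approve ratio exceeds 0.75."""
    ratio = verdicts.count("approve") / len(verdicts)
    return "approve" if ratio > 0.75 else "reject"

votes = ["approve", "approve", "reject"]  # the example from synthesis-rules.md
print(documented_rule(votes))   # majority rule: approve
print(implemented_rule(votes))  # 2/3 ≈ 0.67, below 0.75: reject
```

The same three votes produce opposite verdicts under the two rules, which is exactly the mismatch worth testing before relying on the synthesizer.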

Like a lobster shell, security has layers — review code before you run it.

latest: vk975pd66s9tdewph0fhr3x5xm9837mgj
219 downloads
0 stars
4 versions
Updated 22h ago
v2.0.3
MIT-0

Council v2

A hardened OpenClaw skill for multi-model council reviews. It dispatches independent reviewers, collects structured JSON, and applies a mechanical synthesis protocol so the final verdict is driven by votes and critical findings — not orchestrator vibes.

Primary entrypoint: bash skills/council-v2/scripts/council.sh review <type> [file]

When to Use

Use when a single model reviewing its own work is not enough:

  • Code review before merge or deployment
  • Plan review before committing resources
  • Architecture review for important technical decisions
  • Decision review when multiple plausible options exist
  • Security-sensitive or irreversible choices
  • Pre-flight review, adversarial critique, or second-opinion work

When Not to Use

Do not use for:

  • One-line fixes or trivial edits
  • Low-stakes decisions where overhead exceeds risk
  • Purely factual lookups with no judgment call
  • Work already reviewed recently with no material change

Council Shape

Two tiers are supported:

  • Standard — 3 reviewers for routine code, plan, and decision reviews
  • Full — 5 reviewers for high-stakes, security-sensitive, or irreversible choices

Tier selection heuristic

Use Standard when: routine code changes, internal plans, reversible decisions, low blast radius. Use Full when: security-critical, production-facing architecture, irreversible commitments, high cost of being wrong, or when you want maximum coverage.

When in doubt, start Standard. Escalate to Full if the Standard result is split or if critical findings surface that need more perspectives.
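The tier heuristic above can be sketched as a small helper. The flag names below are illustrative, not part of the skill's CLI; the real selection happens inside council.sh (or via --tier full).

```python
# Hypothetical sketch of the tier-selection heuristic described above.
def choose_tier(security_critical=False, irreversible=False,
                production_facing=False, high_cost_of_error=False):
    """Return 'full' (5 reviewers) when any high-stakes flag is set,
    otherwise 'standard' (3 reviewers)."""
    if security_critical or irreversible or production_facing or high_cost_of_error:
        return "full"
    return "standard"

print(choose_tier())                        # routine change -> standard
print(choose_tier(security_critical=True))  # -> full
```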

Cost note

Full Council runs 5 model calls instead of 3. That is ~1.7x the token cost of Standard. Use Full when the cost of a bad decision exceeds the cost of the extra API calls — which for security, architecture, and irreversible choices, it almost always does.

Detailed role composition and synthesis rules live in:

  • references/review-types.md
  • references/role-prompts.md
  • references/synthesis-rules.md

Review Types

Type           Typical use
code           Source files, scripts, patches, PR diffs
plan           Proposals, project plans, rollout plans
architecture   Systems design, infra decisions, workflows
decision       A/B/C choices with tradeoffs

Definitions: references/review-types.md

Quick Start

# Standard code review
bash skills/council-v2/scripts/council.sh review code src/auth.py

# Force full plan review
bash skills/council-v2/scripts/council.sh review plan proposal.md --tier full

# Architecture review from stdin
cat design.md | bash skills/council-v2/scripts/council.sh review architecture --tier full

# Decision review with options
bash skills/council-v2/scripts/council.sh review decision options.md --options "SQLite,Postgres,Cloud SQL"

# Emit orchestration plan as JSON
bash skills/council-v2/scripts/council.sh review code src/auth.py --format json

How It Works

  1. Loads content from file or stdin
  2. Selects Standard or Full tier
  3. Builds reviewer prompts from references/role-prompts.md
  4. Emits an orchestration plan suitable for sessions_spawn
  5. Collects reviewer JSON outputs
  6. Runs python3 scripts/synthesize.py ...
  7. Returns synthesis with mechanical result, minority report, and conditions
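The steps above reduce to a pipeline like this sketch. The role names, helper functions, and the stubbed reviewer outputs are all assumptions for illustration; the real prompts come from references/role-prompts.md, dispatch goes through sessions_spawn, and synthesis lives in scripts/synthesize.py.

```python
import json

# Illustrative role names only -- the real roles are defined in role-prompts.md.
STANDARD_ROLES = ["correctness", "security-risk", "maintainability"]
FULL_EXTRA_ROLES = ["performance", "operability"]

def build_plan(content, tier="standard"):
    """Steps 1-4: embed the content under review in one prompt per reviewer."""
    roles = STANDARD_ROLES if tier == "standard" else STANDARD_ROLES + FULL_EXTRA_ROLES
    return [{"role": r, "prompt": f"[{r} reviewer]\n\n{content}"} for r in roles]

def synthesize(reviews):
    """Steps 6-7: mechanical, vote-driven verdict (using the >0.75
    approve-ratio rule noted in the security scan)."""
    approvals = sum(1 for r in reviews if r["verdict"] == "approve")
    ratio = approvals / len(reviews)
    return {"mechanical_result": "approve" if ratio > 0.75 else "reject",
            "approve_ratio": round(ratio, 2)}

plan = build_plan("def auth(user): ...")              # steps 1-4
reviews = [{"verdict": "approve"}] * len(plan)        # step 5: stubbed reviewer JSON
print(json.dumps(synthesize(reviews)))                # steps 6-7
```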

Interpreting Results

The synthesizer returns structured JSON and a meaningful exit code:

  • Exit 0 — Approve: clear majority, no criticals. Ship it.
  • Exit 1 — Reject or Blocked: majority rejected or a critical finding blocked. Address the critical findings or rethink the approach.
  • Exit 2 — Approve with conditions: mixed or conditional majority. Fix the flagged conditions, then re-review or proceed with documented risk.
  • Exit 3 — Error: invalid input or synthesis failure. Check reviewer JSON for malformed output; see error handling below.
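In automation, these exit codes can be dispatched on directly. A minimal sketch (the action strings are paraphrases of the table, not output from the skill):

```python
# Dispatch table for the synthesize.py exit codes documented above.
ACTIONS = {
    0: "ship it",
    1: "address critical findings or rethink the approach",
    2: "fix the flagged conditions, then re-review",
    3: "check reviewer JSON for malformed output",
}

def next_action(exit_code):
    """Map a synthesis exit code to the recommended follow-up."""
    return ACTIONS.get(exit_code, f"unexpected exit code {exit_code}")
```

In a CI wrapper you would run python3 scripts/synthesize.py, capture the process return code, and feed it through a dispatch like this.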

Reading the synthesis output

  • mechanical_result: The vote-driven verdict. This is the answer.
  • critical_blocks: Any critical findings that auto-blocked approval. Address these first.
  • conditions: Aggregated recommendations from warning-level findings. These are your fix list.
  • minority_report: The strongest dissent from the majority. Read this even if you agree with the majority — it is often where the best insight lives.
  • anti_consensus_check: Fires on unanimous decisions. Treat the counterargument seriously.
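A representative synthesis payload using the field names above. The values are invented for illustration; the authoritative schema is in references/schema.md.

```python
import json

# Hypothetical synthesis output -- field names from the docs, values invented.
example_synthesis = {
    "mechanical_result": "approve_with_conditions",
    "critical_blocks": [],
    "conditions": [
        "Add input validation on the token refresh path",
    ],
    "minority_report": "Reviewer 3 argued the retry logic hides a race condition.",
    "anti_consensus_check": None,  # only fires on unanimous decisions
}
print(json.dumps(example_synthesis, indent=2))
```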

Error Handling

Reviewer returns invalid JSON

synthesize.py validates every reviewer output against required fields. If a reviewer returns malformed JSON, synthesis exits with code 3 and prints an error message.

What to do:

  1. Check the raw reviewer output for the failing model
  2. Re-run that single reviewer (the orchestration plan shows which models to dispatch)
  3. If the model consistently fails, substitute it — see model override flags below
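The validation behavior described above can be sketched as follows. The required field names here are assumptions for illustration; see references/schema.md for the real list.

```python
import json

REQUIRED_FIELDS = {"verdict", "findings", "confidence"}  # illustrative, not the real schema

def validate_review(raw):
    """Return the parsed review dict, or None if it is malformed --
    mirroring the exit-code-3 behavior of synthesize.py on bad input."""
    try:
        review = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if not isinstance(review, dict) or not REQUIRED_FIELDS.issubset(review):
        return None
    return review

good = '{"verdict": "approve", "findings": [], "confidence": 0.8}'
bad = '{"verdict": "approve"}'  # missing required fields
print(validate_review(good) is not None)  # True
print(validate_review(bad))               # None
```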

Provider is down or times out

If a provider fails to respond, the review set will be incomplete. Run synthesis on whatever outputs you have — a 2-of-3 Standard review is still useful. Note the missing reviewer in your assessment.

Model override flags

Override any model at the command line:

bash skills/council-v2/scripts/council.sh review code src/auth.py \
 --opus claude-sonnet-4 \
 --gpt gpt-4.1 \
 --grok grok-3

Available flags: --opus, --gpt, --grok, --deepseek, --gemini

Model Diversity

The council's value comes from different providers with different training data and different biases reviewing the same decision. The specific model versions (Opus, GPT-5.4, Grok 4, etc.) matter less than the diversity. Swap in whatever top-tier models you have access to — what matters is that they are not all from the same provider.

Retrospectives

scripts/retro.sh generates a structured retrospective template for reviewing past council decisions against actual outcomes.

# Review the 5 most recent decisions in a directory
bash skills/council-v2/scripts/retro.sh ./council-outputs/ 5

When to run retros

Run monthly, or after any decision where the outcome surprised you. The retro surfaces:

  • Which reviewers provided signal vs. noise
  • Whether critical findings were real or false alarms
  • Whether synthesis preserved minority views accurately
  • Prompt changes to consider for role-prompts.md

Feed retro findings back into references/role-prompts.md to calibrate the council.

Notes

  • Requires bash, python3, and OpenClaw reviewer dispatch capability
  • Model aliases can be overridden — see model override flags above
  • Synthesis rules are documented in references/synthesis-rules.md

References

  • references/review-types.md — review type definitions and tier recommendations
  • references/role-prompts.md — reviewer role prompts and shared output instructions
  • references/schema.md — JSON schemas for reviewer output and synthesis output
  • references/synthesis-rules.md — mechanical synthesis protocol and edge cases
