Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
Council v2
v2.0.3
Multi-model council review that spawns 3-5 independent AI reviewers and applies mechanical synthesis — votes decide, not orchestrator opinion. Use when you n...
⭐ 0 · 126 · 0 current · 0 all-time
by Don Zurbrick (@zurbrick)
MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Suspicious
Medium confidence

Purpose & Capability
Name, description, README, role prompts, orchestration script, and synthesizer all align: the skill builds reviewer prompts, collects reviewer JSON, and mechanically synthesizes a vote-driven verdict. It does not request unrelated credentials or system access in its manifests.
Instruction Scope
The runtime instructions and scripts read file contents or stdin and embed the full content in an orchestration prompt (council.sh). That content is intended to be dispatched to external model providers (sessions_spawn), so any secrets in reviewed files will be sent to those models. More importantly, references/synthesis-rules.md gives examples (e.g., 2 approves + 1 reject -> approve) that contradict the implementation in scripts/synthesize.py, which requires an approve ratio > 0.75 to produce an 'approve'. This behavioral mismatch means the tool may produce different outcomes than its documentation promises.
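The mismatch is easy to see in a minimal sketch. This is NOT the code from scripts/synthesize.py; the function name and JSON shape are assumptions, illustrating only the ratio > 0.75 rule the scan describes:

```python
# Hypothetical stand-in for the synthesis rule the scan attributes to
# scripts/synthesize.py: approve only when the approve ratio exceeds 0.75.
def synthesize(reviews, threshold=0.75):
    """Mechanically combine reviewer verdicts; no orchestrator opinion."""
    votes = [r["verdict"] for r in reviews]
    ratio = votes.count("approve") / len(votes)
    return "approve" if ratio > threshold else "reject"

# The documented example (2 approves + 1 reject) is supposed to approve,
# but the ratio is 2/3 ≈ 0.67, below the 0.75 threshold:
reviews = [{"verdict": "approve"}, {"verdict": "approve"}, {"verdict": "reject"}]
print(synthesize(reviews))  # -> reject, contradicting the docs' example
```

Under this rule a 3-reviewer council can only approve unanimously, which is a materially stricter policy than the documentation implies.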
Install Mechanism
No install spec is provided (instruction-only skill with bundled scripts). The included scripts are small, local, and text-based; there is no remote download or archive extraction. README suggests cloning a GitHub repo but the skill package itself already contains code — no automated network installs are required by the provided files.
Credentials
The skill declares no required env vars or credentials. The README recommends using OpenRouter or direct provider config (OPENROUTER_API_KEY example), but the scripts rely on the host/platform's model provider configuration (sessions_spawn) rather than managing keys themselves. This is proportionate, but you must ensure your OpenClaw/host model provider config is correct and that you understand which provider/API keys will be used.
Persistence & Privilege
No 'always: true' or other privileged persistence requested. The skill does not modify other skills or global agent config; scripts operate on local files and stdout. Autonomous invocation is allowed by default (normal for skills) but is not combined with other high-risk flags here.
What to consider before installing
This skill is plausibly what it says, but review these points before installing or using it on real secrets:
- Behavior vs docs: The documented synthesis examples (e.g., approve+approve+reject -> approve) conflict with the code in scripts/synthesize.py, which requires an average >0.75 to return 'approve'. Test the synthesizer on representative reviewer JSON to confirm the actual behavior and, if needed, update either the docs or code.
- Sensitive data: council.sh constructs an orchestration prompt containing the full content under review and prints it to stdout (or JSON). That content will be forwarded to whatever model providers your OpenClaw installation uses. Do not run reviews on files with secrets, credentials, or private data unless you have explicitly configured safe provider handling and logging controls.
- Provider configuration & provenance: The skill itself does not hold API keys; it expects the agent/platform to supply model providers (OpenRouter or direct providers). Make sure your OpenClaw model/provider configuration enforces provider diversity if you want the intended cross-provider council. Also verify the origin of this package (README references a GitHub repo) if provenance matters for your environment.
- Sanity checks: Run the synthesizer locally with mock reviewer JSON to validate exit codes and outputs. Review the role prompts to ensure they match your threat model (e.g., ensure 'Security & Risk' actually flags the issues you care about). If you rely on mechanical blocking for security decisions, explicitly test the 'critical' finding flow.
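The sanity checks above can be sketched as a small mock-input harness. The JSON shape, the 'critical'-blocks rule, and the verdict names here are assumptions; the point is to enumerate cases and expected outcomes, then compare against what scripts/synthesize.py actually returns:

```python
# Hedged test-plan sketch: mock reviewer JSON plus the verdicts expected under
# the code's apparent rules (approve ratio > 0.75; any 'critical' finding
# blocks). Validate these expectations against the real synthesizer.
def expected_verdict(reviews, threshold=0.75):
    if any(f.get("severity") == "critical"
           for r in reviews for f in r.get("findings", [])):
        return "block"  # assumed mechanical-blocking rule for critical findings
    ratio = sum(r["verdict"] == "approve" for r in reviews) / len(reviews)
    return "approve" if ratio > threshold else "reject"

cases = {
    "unanimous_approve": [{"verdict": "approve", "findings": []}] * 4,
    "two_to_one": [
        {"verdict": "approve", "findings": []},
        {"verdict": "approve", "findings": []},
        {"verdict": "reject", "findings": []},
    ],
    "critical_blocks": [
        {"verdict": "approve", "findings": [{"severity": "critical"}]},
        {"verdict": "approve", "findings": []},
    ],
}

for name, reviews in cases.items():
    print(name, "->", expected_verdict(reviews))
# unanimous_approve -> approve; two_to_one -> reject; critical_blocks -> block
```

If the real synthesizer disagrees with any of these expectations, that pins down exactly where the docs and code diverge.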
If you want, I can produce a short test plan (example reviewer JSON inputs and expected outputs) to validate the synthesizer behavior and expose the docs/code mismatch.
Tags: architecture · council · latest · multi-model · review · security
