Multi Model Critique

Pass. Audited by ClawScan on May 1, 2026.

Overview

This skill appears purpose-aligned and benign, but it may send your complex request to multiple model agents and optionally save prompt/run-plan files locally.

This skill is reasonable to use for complex, high-value questions if you are comfortable with the selected model agents seeing the prompt and intermediate answers. For sensitive work, limit the set of models used, redact private data, set budget and time controls, and clean up any generated local files.

Findings (3)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Finding 1: Multi-agent delegation

What this means

A single user request may be processed by several model agents, which can increase cost, latency, and the number of systems handling the prompt.

Why it was flagged

The skill directs the agent to spawn and coordinate multiple ACP sessions. This is central to the advertised multi-model workflow and is bounded by the `complex=true` trigger and the four-round process, but it affects runtime, cost, and delegated execution.

Skill content
Spawn one ACP session per model with the same task and constraints. ... Use `sessions_spawn` with `runtime:"acp"` and explicit `agentId`.

Recommendation

Use this skill only when the multi-model workflow is desired, specify trusted/allowed agent IDs, and set budget or timeout controls for cost-sensitive tasks.
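The fan-out this finding describes, with the allow-list and budget controls the recommendation suggests, can be sketched as follows. The `sessions_spawn` call is stubbed locally because its real signature is not documented in this report, and the agent IDs and budget values are illustrative assumptions:

```python
# Stand-in for the platform's sessions_spawn call; the real API's
# signature is not documented here, so this stub only records its inputs.
def sessions_spawn(runtime: str, agent_id: str, task: str) -> dict:
    return {"runtime": runtime, "agentId": agent_id, "task": task}

ALLOWED_AGENTS = {"model-a", "model-b"}  # explicit allow-list (assumed IDs)
MAX_SESSIONS = 3                         # budget control (assumed value)

def fan_out(task: str, agent_ids: list[str]) -> list[dict]:
    """Spawn one ACP session per allowed model, up to the session budget."""
    selected = [a for a in agent_ids if a in ALLOWED_AGENTS][:MAX_SESSIONS]
    return [sessions_spawn(runtime="acp", agent_id=a, task=task) for a in selected]

# "model-c" is filtered out by the allow-list, so only two sessions spawn.
sessions = fan_out("Compare approaches X and Y", ["model-a", "model-b", "model-c"])
```

Applying the allow-list and budget before any session is created keeps cost and delegation bounded even if the caller passes an over-broad model list.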

Finding 2: Cross-session data exposure

What this means

Sensitive information in the original question or generated answers may be visible to multiple selected model agents during the critique process.

Why it was flagged

The workflow intentionally passes model outputs between separate ACP sessions for cross-critique. This is purpose-aligned, but it means user content and generated drafts cross agent/session boundaries.

Skill content
For each model session, send a critique prompt with:
- its own draft (`SELF_DRAFT`)
- peer drafts (`PEER_DRAFTS`)

Recommendation

Avoid including highly sensitive data unless the selected ACP agents are trusted for that data; prefer redaction or a narrower model list when privacy matters.
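The critique exchange above amounts to plain prompt assembly, and sketching it makes the privacy point concrete: every peer draft, including any sensitive content in it, lands in every other model's prompt. Field labels follow the `SELF_DRAFT`/`PEER_DRAFTS` names from the excerpt; the drafts and agent names are illustrative:

```python
def build_critique_prompt(self_draft: str, peer_drafts: dict[str, str]) -> str:
    """Assemble the prompt one model session receives in the critique round:
    its own draft plus every peer's draft."""
    peers = "\n\n".join(
        f"--- peer draft ({agent}) ---\n{draft}"
        for agent, draft in sorted(peer_drafts.items())
    )
    return (
        f"SELF_DRAFT:\n{self_draft}\n\n"
        f"PEER_DRAFTS:\n{peers}\n\n"
        "Critique the drafts above."
    )

drafts = {"model-a": "Draft A", "model-b": "Draft B"}
# The prompt sent to model-a contains model-b's full draft verbatim.
prompt_for_a = build_critique_prompt(
    drafts["model-a"], {k: v for k, v in drafts.items() if k != "model-a"}
)
```

Since each prompt carries every peer's draft verbatim, redacting sensitive data before round one is the only point at which it can be kept out of the cross-critique.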

Finding 3: Local prompt persistence

What this means

If the helper script is used, parts of the prompt may remain in local files after the session.

Why it was flagged

The helper script stores the user question and constraints in a local run-plan JSON file. This is disclosed and user-directed through the `--out` path, but it can persist prompt content on disk.

Skill content
"question": question,
"constraints": constraints,
...
out_path.write_text(json.dumps(plan, ensure_ascii=False, indent=2), encoding="utf-8")

Recommendation

Choose an appropriate output location, avoid storing sensitive prompts unnecessarily, and delete generated prompt/run-plan files when no longer needed.
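A sketch of the disclosed persistence path and the cleanup step the recommendation calls for. The write mirrors the field names in the excerpt above; the cleanup helper is an assumption for illustration, not part of the skill:

```python
import json
import tempfile
from pathlib import Path

def write_run_plan(question: str, constraints: list[str], out_path: Path) -> None:
    # Mirrors the excerpt: prompt content is persisted to disk as JSON.
    plan = {"question": question, "constraints": constraints}
    out_path.write_text(json.dumps(plan, ensure_ascii=False, indent=2), encoding="utf-8")

def cleanup_run_plan(out_path: Path) -> None:
    # Hypothetical helper: delete the run-plan once it is no longer needed.
    out_path.unlink(missing_ok=True)

# Writing under a temporary directory keeps the run-plan out of long-lived paths.
with tempfile.TemporaryDirectory() as tmp:
    out = Path(tmp) / "run_plan.json"
    write_run_plan("What is X?", ["cite sources"], out)
    loaded = json.loads(out.read_text(encoding="utf-8"))
    cleanup_run_plan(out)
```

Pointing `--out` at a temporary or session-scoped directory and deleting the file afterward keeps prompt content from outliving the session.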