Skill v2.0.0

ClawScan security

Critical Debater Suite · ClawHub's context-aware review of the artifact, metadata, and declared behavior.

Scanner verdict

Benign · Mar 13, 2026, 2:14 AM
Verdict
Benign
Confidence
high
Model
gpt-5-mini
Summary
The skill's files, runtime instructions, and required resources are coherent with a multi-agent debate/orchestration tool; nothing requested is disproportionate to its stated purpose. It does, however, require internet access and external agent runtimes, and it includes open-ended LLM judgment steps and an optional cron job, all of which you should review before running.
Guidance
This skill appears to be what it says: a debate orchestration suite that searches the web, runs LLM-based verification, and writes structured reports to a workspace. Before installing/running it, consider:

1. Run it in an isolated test workspace or sandbox, because it will fetch web content and write files.
2. Review the local CLIs it will call: the orchestrator looks for 'claude' and 'codex' on PATH and will run them via subprocess. Ensure those CLIs are trustworthy, or absent if you don't want outbound model calls.
3. The skill requires internet access for searches and fetching; expect data (topic text, prompts, snippets) to leave the host to whatever runtime you use.
4. Inspect the scripts you rely on (especially validate-json.sh and any omitted files) and the optional cron creation before enabling scheduled refreshes.
5. Be aware that LLM judgments (freshness, credibility, social-media handling) are intentionally semantic and open-ended; audit outputs and the audit trail to confirm behavior matches your expectations.
6. Note a minor metadata inconsistency: the registry lists source/homepage as unknown/none while SKILL.md contains a GitHub homepage. If provenance matters, verify the upstream source before production use.
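The PATH review in point 2 can be done before ever launching the orchestrator. The sketch below is a hypothetical pre-flight check, not part of the skill; it only assumes the CLI names 'claude' and 'codex' reported by the scan:

```python
import shutil

# Hypothetical pre-flight check (not part of the skill): report which
# of the agent CLIs named in the scan ('claude', 'codex') a subprocess
# call from the orchestrator would actually resolve on PATH.
def find_agent_clis(names=("claude", "codex")):
    return {name: shutil.which(name) for name in names}

for name, path in find_agent_clis().items():
    print(f"{name}: {path or 'not found'}")
```

If a CLI resolves to an unexpected location, or you see one you intended to keep off the host, resolve that before running the skill.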

Review Dimensions

Purpose & Capability
ok · Name/description (multi-agent adversarial debate) matches the code and SKILL.md: it reads/writes a debate workspace, ingests web sources, runs judge/audits, and synthesizes reports. It does not request unrelated credentials or system-wide config. It does require internet access and local agent CLIs (claude/codex), which are reasonable for its design.
Instruction Scope
note · SKILL.md and capability modules explicitly instruct web search/fetch, spawn agent roles, read/write many workspace files (evidence, claims, rounds, reports), and run included scripts. These actions are appropriate for orchestration. Two things to note: (1) LLM-driven judgments (freshness, credibility tiers, verification) are intentionally open-ended and give the agent broad semantic discretion, which could produce inconsistent results or over-broad searches; (2) Phase 4 optionally creates a 6-hour cron job if the user agrees — that is persistent system activity and should be approved by the operator.
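Operators who approve the Phase 4 cron job can verify what actually landed in the crontab. This is a hypothetical audit helper; the `*/6` hour-field pattern is an assumption about what a 6-hour schedule would look like, so read the raw crontab output yourself as well:

```python
import subprocess

# Hypothetical audit helper: list the current user's crontab and flag
# entries with a */6 field, the shape a 6-hour refresh job would take.
# The actual cron line the skill writes may differ.
def six_hour_cron_entries():
    try:
        proc = subprocess.run(["crontab", "-l"],
                              capture_output=True, text=True)
    except FileNotFoundError:   # no crontab binary on this host
        return []
    if proc.returncode != 0:    # no crontab installed for this user
        return []
    return [line for line in proc.stdout.splitlines() if "*/6" in line]
```

Running this before and after enabling scheduled refreshes makes the persistence the review describes directly observable.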
Install Mechanism
ok · No install spec and no network download at install time; the skill is instructions plus included scripts. The included scripts are small and straightforward (init, hash, validate, append-audit) and the Python orchestrator runs them. There are no remote installs or archive downloads embedded in the skill package.
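Since the package ships its scripts directly, you can pin exactly what you reviewed by recording digests before first run. A minimal sketch, assuming you point it at the script paths inside the skill package (the helper name is illustrative):

```python
import hashlib
from pathlib import Path

# Hypothetical audit step: record the SHA-256 of each included script
# so a later run can confirm the reviewed bytes are unchanged. Pass
# the real paths from the skill package (e.g. validate-json.sh).
def hash_scripts(paths):
    return {str(p): hashlib.sha256(Path(p).read_bytes()).hexdigest()
            for p in paths}
```

Comparing these digests against a fresh copy of the package is a cheap way to detect drift between what was scanned and what you run.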
Credentials
ok · The skill does not request environment variables, credentials, or access to unrelated config paths. It does expect bash, jq, python3, shasum, and network access to be available, and it will attempt to call local CLIs named 'claude' or 'codex' if present. This is consistent with its purpose, but it means those CLIs' behavior matters for privacy and security.
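The tool expectations above can be checked up front. This sketch only assumes the tool list reported by the scan (bash, jq, python3, shasum); the helper itself is illustrative:

```python
import shutil

# Preflight for the tools the scan says the skill expects. The list
# comes from the report; shutil.which returns None for a missing tool.
REQUIRED_TOOLS = ["bash", "jq", "python3", "shasum"]

def missing_tools(tools=REQUIRED_TOOLS):
    return [t for t in tools if shutil.which(t) is None]

print("missing:", ", ".join(missing_tools()) or "none")
```

A non-empty result means the orchestrator would fail or silently skip steps, so resolve it before enabling the skill.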
Persistence & Privilege
note · always:false and disable-model-invocation:false (normal). The orchestrator writes files inside its workspace and can (only with user agreement per SKILL.md) create a scheduled 6-hour cron job to refresh evidence; that is the only persistence beyond workspace files. The skill does not attempt to modify other skills or global agent configs.