Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Adversarial Review

v1.0.0

Run a structured adversarial multi-agent review loop on any significant document. Spawns parallel Opus reviewers with different critical lenses, collects str...

by Scott Jensen (@scott3j)
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Suspicious
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The name and description match the included files: reviewer persona templates, review-type bundles, and shell/Node helper scripts for session init, synthesis, and copying output. However, SKILL.md instructs the agent to spawn reviewers with specific high-capability models (e.g., anthropic/claude-opus-4-6), yet the skill declares no required environment variables or credentials for model/API access. That may be fine if the hosting platform provides model access, but it is an implicit requirement that isn't documented. The skill also ships runtime scripts that expect a Node runtime (synthesize.sh creates and runs a temporary Node script) without declaring any required binaries; that is a clear undeclared dependency.
Instruction Scope
Instructions are detailed and constrained to the review workflow: create a session dir under ~/.openclaw/workspace/reviews, copy the input doc there, spawn reviewers (via sessions_spawn with explicit model/params), write per-reviewer redlines, synthesize combined results, record positions, and produce a v2. This is consistent with the stated purpose. Two things to note: (1) the skill instructs agents to self-trigger the Complexity Self-Assessment whenever producing substantial documents, which grants the skill broad, frequent usage if the host agent honors it; (2) the workflow reads and writes user documents into a home-directory workspace (~/.openclaw), so it stores local copies of every reviewed document.
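The filesystem side of that workflow can be sketched roughly as follows. Only the ~/.openclaw/workspace/reviews path comes from the skill's instructions; the demo-session name, the redlines subdirectory, and the stand-in document are illustrative:

```shell
#!/usr/bin/env sh
# Sketch of the session setup the workflow describes: create a session dir,
# then copy the input document into it (so a local copy is always stored).

# stand-in document (illustrative)
DOC="/tmp/draft.md"
printf 'Draft to review\n' > "$DOC"

# session directory under the workspace path the skill uses
SESSION="$HOME/.openclaw/workspace/reviews/demo-session"
mkdir -p "$SESSION/redlines"   # hypothetical subdir for per-reviewer redlines

cp "$DOC" "$SESSION/"          # the reviewed doc now persists locally
echo "session created at $SESSION"
```

Note that this is only the storage side; the reviewer-spawning and synthesis steps depend on platform model access and are not reproduced here.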
Install Mechanism
There is no install spec (instruction-only), which is normally low-risk. However, synthesize.sh dynamically writes and executes a Node.js script by invoking node, and the skill does not declare node (or npm) as a required binary. If node is not present on the host, synthesis will fail. The missing runtime/binary declaration is an inconsistency that could cause runtime errors or surprising behavior. The skill also writes temporary files under /tmp and persistent files under the user's home directory (expected for a session store).
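Given the undeclared node dependency, a host could run a preflight check like the following before enabling the skill. This is a sketch based on the gaps noted above, not something the skill ships:

```shell
#!/usr/bin/env sh
# Preflight check for the skill's undeclared runtime requirements.

# synthesize.sh writes and runs a temporary Node.js script, so node must exist.
if command -v node >/dev/null 2>&1; then
  echo "node found: $(node --version)"
else
  echo "WARNING: node missing; the synthesis step will fail" >&2
fi

# /tmp must be writable for the temporary script the skill generates there.
TMPFILE=$(mktemp /tmp/preflight.XXXXXX) && rm -f "$TMPFILE" && echo "/tmp writable"
```

A stricter variant could exit non-zero on a missing binary so automation refuses to install the skill on hosts that cannot run it.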
Credentials
The skill requests no environment variables, no credentials, and no config paths beyond creating and using ~/.openclaw/workspace/reviews. It does not attempt to read or exfiltrate other system credentials. The lack of any requested API keys is consistent if the platform supplies model invocation capability; if not, model spawning steps may fail silently or require additional platform-level credentials.
Persistence & Privilege
always:false and disable-model-invocation:false (defaults): the skill is not forced into every agent run, but SKILL.md explicitly urges self-triggering behavior, instructing the agent to run the complexity self-assessment whenever it produces substantial documents. That is a behavioral scope request (not a platform-level always:true), and it could lead to frequent automatic usage if the agent honors it. The skill creates and persists session data under ~/.openclaw/workspace/reviews, which is expected for its function.
What to consider before installing
This skill appears to be what it says (a structured review workflow), but check a few things before installing or using it widely:

- Missing runtime dependency: synthesize.sh writes and runs a temporary Node.js script and requires the node binary, but the skill does not declare node as a required binary. Ensure node is available on the host or modify the script to use an available runtime.
- Model invocation assumption: SKILL.md expects the agent to spawn reviewers with named models (e.g., anthropic/claude-opus-4-6). Confirm your platform provides access to those models, or that you are comfortable with the platform's model invocation behavior; otherwise reviewer spawning will fail or behave differently.
- Local storage: the skill copies reviewed documents into ~/.openclaw/workspace/reviews and persists reviewer outputs, positions, and final v2 documents. If you handle sensitive documents, consider the storage location, encryption, and cleanup policies.
- Self-triggering behavior: the skill instructs agents to run a self-assessment automatically when producing substantial documents. Decide whether you want that behavior enabled by default; if not, avoid loading or activating the skill persistently, or ensure the agent's skill-eligibility rules prevent automatic runs.

If you accept these conditions, the skill is functionally coherent. If you need to trust it with highly sensitive documents, review and, if necessary, modify the scripts (or change the session path) and confirm model access and permissions first.
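For the local-storage concern, a cleanup policy could look roughly like this. Only the session path comes from the scan; the 7-day retention window and the one-directory-per-session assumption are illustrative:

```shell
#!/usr/bin/env sh
# Sketch of a retention policy for the skill's local session store.
SESSION_DIR="$HOME/.openclaw/workspace/reviews"
mkdir -p "$SESSION_DIR"   # no-op if the skill already created it

# Remove review sessions (assumed one directory per session) older than 7 days.
find "$SESSION_DIR" -mindepth 1 -maxdepth 1 -type d -mtime +7 -exec rm -rf {} +
echo "pruned sessions older than 7 days in $SESSION_DIR"
```

Running something like this from a scheduled job limits how long copies of reviewed documents linger on disk; for truly sensitive material, changing the session path to an encrypted volume is the safer option.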


latest: vk97b8j9t6vmb4as8yea5mvrzas83k42h

