Adversarial Code Review
v1.0.0 · Use when reviewing pull requests or critiquing code changes and you want high-signal, low-noise feedback by running multiple adversarial agents that challeng...
by @reikys
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Benign (high confidence)
Purpose & Capability
The name and description ("adversarial code review") match the SKILL.md: it describes a three-agent review pattern that reads diffs and produces filtered, high-signal comments. Mentioning the Claude CLI as a dependency is consistent with the described model-driven workflow.
Instruction Scope
Runtime instructions focus on reading PR diffs, PR metadata, and reviewer outputs, and on running model invocations via the Claude CLI. There are no instructions to read unrelated system files, exfiltrate data, or call external endpoints beyond the model CLI; the skill stays within its review scope.
Install Mechanism
This is an instruction-only skill: it has no install spec and writes no code to disk. Risk is low because nothing in the manifest installs arbitrary packages or fetches code.
Credentials
The skill does not declare any required environment variables or credentials, but it depends on a model CLI (Claude) that in practice requires credentials and configuration outside the skill. That is proportionate to the purpose, but users should be aware the model CLI will need access to the repository and to model credentials, neither of which is declared here.
Persistence & Privilege
The "always" flag is false and the skill is user-invocable only. It does not request persistent presence, and it does not modify other skills or system-wide settings.
Assessment
This skill is internally consistent, but before installing or enabling it:
(1) Ensure the Claude (or other model) CLI you plan to use is legitimate and that its API credentials are stored and scoped appropriately; the SKILL.md presumes such credentials but does not declare them.
(2) When running in CI, limit the model's access to only the repositories and PRs it needs, and avoid exposing secrets in diffs or in environment variables passed to the model.
(3) Review and test the system prompts (they intentionally prime reviewers to assume bugs) to avoid biased false positives on low-risk PRs.
(4) If you use a hosted model, consider the data residency and privacy implications of sending diffs to the model provider.
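The CI-scoping advice above can be sketched as a shell step. This is a minimal sketch under assumptions, not part of the skill itself: it assumes the Claude Code CLI's non-interactive "-p" (print) mode and an ANTHROPIC_API_KEY credential, both of which depend on your provider and setup.

```shell
# Sketch of scoping a model CLI in CI (not taken from the skill).
# Assumptions: the Claude Code CLI's -p print mode and an
# ANTHROPIC_API_KEY environment variable; adapt both to your provider.

# Only proceed when inside a git repo with the CLI installed,
# so this step is a safe no-op elsewhere.
if git rev-parse --is-inside-work-tree >/dev/null 2>&1 \
   && command -v claude >/dev/null 2>&1; then
  # Capture only the PR diff; do not hand the model the whole checkout.
  git diff origin/main...HEAD > pr.diff

  # env -i strips the inherited environment so CI secrets in env vars
  # are invisible to the model CLI; re-export only what it needs.
  env -i HOME="$HOME" PATH="$PATH" ANTHROPIC_API_KEY="$ANTHROPIC_API_KEY" \
    claude -p "Review this diff for correctness and security issues." < pr.diff
fi
```

The key idea is the allowlist: env -i drops everything, and each variable the CLI genuinely needs is re-exported explicitly, so a token for an unrelated service never reaches the model process.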
latest · vk9782fk7mvmk8bbf01q19rw6q983n74y
