Research Review Skill Factory
Pass. Audited by ClawScan on May 4, 2026.
Overview
This skill builds research-review skills from public OpenReview data. Its Python commands are disclosed, its file generation is scoped to local output directories, and the reviewed artifacts show no malicious behavior.
This appears safe to install for its stated purpose. Before using it, be aware that it can run Python scripts, contact OpenReview, write local evidence and child-skill files, and create a persistent review-response bank. Review generated outputs before packaging or publishing them, and verify any validation command that is not included with the skill.
Findings (3)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
The skill may make network requests to OpenReview and create local evidence files while building a reviewer skill.
The skill instructs the agent or user to run a Python helper that contacts OpenReview and writes output files. This behavior is disclosed and central to the research-evidence workflow, but it is still active tool use and should be run in an appropriate workspace.
```
python scripts/fetch_openreview_field_evidence.py --field "<query>" --years <Y1> <Y2> <Y3> --output "<evidence-dir>/<query-slug>"
```
Run the commands in a project workspace, choose a non-sensitive output directory, and review generated evidence files before packaging or sharing the child skill.
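For orientation, the sketch below shows the kind of work a helper like this typically does. It is not the shipped `fetch_openreview_field_evidence.py`; the endpoint, query parameters, and output layout are assumptions based on OpenReview's public notes-search API and the command's flags.

```python
# Hypothetical sketch of a disclosed fetch helper; NOT the shipped
# fetch_openreview_field_evidence.py. Assumes OpenReview's public
# notes-search endpoint and writes only under the --output directory.
import argparse
import json
import pathlib

import requests

API = "https://api.openreview.net/notes/search"  # public, read-only

def fetch_evidence(field: str, years: list[int], output: pathlib.Path) -> None:
    output.mkdir(parents=True, exist_ok=True)  # scoped local file generation
    for year in years:
        resp = requests.get(
            API,
            params={"term": f"{field} {year}", "content": "all", "source": "all"},
            timeout=30,
        )
        resp.raise_for_status()
        notes = resp.json().get("notes", [])
        # One reviewable evidence file per year, inside the chosen directory.
        (output / f"{year}.json").write_text(json.dumps(notes, indent=2))

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--field", required=True)
    parser.add_argument("--years", type=int, nargs="+", required=True)
    parser.add_argument("--output", type=pathlib.Path, required=True)
    args = parser.parse_args()
    fetch_evidence(args.field, args.years, args.output)
```

Because all writes stay under `--output`, pointing that flag at a non-sensitive directory contains the side effects, which is why the recommendation above is sufficient.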
A user could accidentally rely on an unreviewed validator if one exists elsewhere in the environment.
The workflow references a validation script that is not included in the supplied file manifest. This is not evidence of malicious behavior, but if followed literally, the agent may run a local or external helper outside the reviewed artifact set.
Run `quick_validate.py` on the child skill.
Use only a known trusted validation script, or replace this step with an explicitly reviewed validation command.
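If no trusted validator is available, a reviewed stand-in can be as small as the sketch below. It is not the missing `quick_validate.py`; it assumes only that a child skill is a directory containing a SKILL.md with YAML frontmatter, and the specific checks are illustrative.

```python
# Minimal stand-in for an explicitly reviewed validation step; NOT the
# missing quick_validate.py. Checks only structural basics of a child skill.
import pathlib
import sys

def validate_child_skill(skill_dir: pathlib.Path) -> list[str]:
    problems = []
    skill_md = skill_dir / "SKILL.md"
    if not skill_md.is_file():
        return [f"missing {skill_md}"]
    text = skill_md.read_text(encoding="utf-8")
    # Assumed convention: SKILL.md opens with a YAML frontmatter block.
    if not text.startswith("---"):
        problems.append("SKILL.md lacks a frontmatter block")
    for field in ("name:", "description:"):
        if field not in text:
            problems.append(f"frontmatter missing '{field.rstrip(':')}'")
    return problems

if __name__ == "__main__":
    issues = validate_child_skill(pathlib.Path(sys.argv[1]))
    for issue in issues:
        print(f"FAIL: {issue}")
    sys.exit(1 if issues else 0)
```

A nonzero exit status on failure makes it easy to gate packaging or publishing on the check.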
Future reviews could be influenced by stale, biased, or poorly summarized OpenReview precedent.
The generated child skill persists a summarized OpenReview-derived evidence bank that future reviewer agents will read. The artifacts include appropriate cautionary language, but external/public evidence can still be biased, incomplete, or misleading if not curated.
Retrieve the local area's review-response bank before writing review comments; treat OpenReview evidence as precedent, not as law.
Curate the evidence bank, keep citations to forum URLs, avoid copying raw reviews, and refresh or label limited evidence when coverage is sparse.
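One way to make that curation concrete is to give each bank entry an explicit shape. The field names below are illustrative assumptions, not the skill's actual schema; the point is that every entry carries a forum-URL citation, a written summary instead of raw review text, a retrieval date, and a coverage label.

```python
# Illustrative shape for one curated evidence-bank entry; field names and
# values are assumptions, not the skill's actual schema.
from dataclasses import asdict, dataclass
import json

@dataclass
class EvidenceEntry:
    forum_url: str  # citation back to the public OpenReview forum
    summary: str    # curator-written summary, never a raw review copy
    retrieved: str  # ISO date, so stale precedent is easy to spot
    coverage: str   # flag sparse evidence rather than hiding it

entry = EvidenceEntry(
    forum_url="https://openreview.net/forum?id=<example-id>",
    summary="Example summary of a recurring reviewer-response pattern.",
    retrieved="2026-05-04",
    coverage="limited: 3 threads only",
)
print(json.dumps(asdict(entry), indent=2))
```

Entries structured this way let a future reviewer agent weigh precedent by date and coverage instead of treating every cached summary as equally authoritative.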
