Research Review Skill Factory
Pass. Audited by VirusTotal on May 4, 2026.
Overview
Type: OpenClaw Skill
Name: research-review-skill-factory
Version: 1.0.1

The skill bundle is a meta-tool designed to automate the creation of specialized research-area reviewer skills by synthesizing public data from OpenReview. The included Python scripts (fetch_openreview_field_evidence.py and init_research_area_review_skill.py) use only standard libraries, communicate exclusively with the legitimate api2.openreview.net endpoint, and implement safety checks such as path-traversal validation when creating new skill directories. No evidence of malicious intent, data exfiltration, or harmful prompt injection was found; the functionality is transparent and aligns with the stated purpose.
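The path-traversal validation mentioned above can be illustrated with a minimal sketch. This is not the skill's actual implementation; the function name and error message are assumptions, but the resolve-then-containment-check pattern is the standard stdlib approach:

```python
from pathlib import Path

def safe_child_dir(base: Path, name: str) -> Path:
    """Resolve a child-skill directory under `base`, rejecting path traversal.

    Hypothetical helper: the real script's check may differ in detail.
    """
    resolved_base = base.resolve()
    candidate = (base / name).resolve()
    # After resolving symlinks and ".." segments, the candidate must still
    # sit strictly inside the base workspace directory.
    if resolved_base not in candidate.parents:
        raise ValueError(f"refusing path outside workspace: {name}")
    return candidate
```

A name like `../escape` resolves to a path outside the workspace and is rejected before any directory is created.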
Findings (0)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
The skill may make network requests to OpenReview and create local evidence files while building a reviewer skill.
The skill instructs the agent or user to run a Python helper that contacts OpenReview and writes output files. This is disclosed and central to the research-evidence workflow, but it is still active tool use, and users should run it in an appropriate workspace.
python scripts/fetch_openreview_field_evidence.py --field "<query>" --years <Y1> <Y2> <Y3> --output "<evidence-dir>/<query-slug>"
Run the commands in a project workspace, choose a non-sensitive output directory, and review generated evidence files before packaging or sharing the child skill.
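For context, a stdlib-only fetch against api2.openreview.net can be sketched as below. The `/notes/search` path and the `term`/`limit` parameter names are assumptions for illustration, not the script's confirmed API usage:

```python
import json
import urllib.parse
import urllib.request

# Endpoint path and query parameters are assumptions; only the host
# (api2.openreview.net) is stated in the reviewed artifacts.
API = "https://api2.openreview.net/notes/search"

def build_search_url(query: str, limit: int = 5) -> str:
    """Build a search URL for public OpenReview notes (sketch)."""
    params = urllib.parse.urlencode({"term": query, "limit": limit})
    return f"{API}?{params}"

def fetch_notes(query: str, limit: int = 5) -> list:
    """Fetch matching notes; returns an empty list when none are found."""
    with urllib.request.urlopen(build_search_url(query, limit), timeout=30) as resp:
        return json.load(resp).get("notes", [])
```

Because the helper only reads public data over HTTPS, the main user-side concern is where the evidence files land, hence the non-sensitive output directory advice above.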
A user could accidentally rely on an unreviewed validator if one exists elsewhere in the environment.
The workflow references a validation script that is not included in the supplied file manifest. This is not evidence of malicious behavior, but if followed literally the agent may run a local or external helper outside the reviewed artifact set.
Run `quick_validate.py` on the child skill.
Use only a known trusted validation script, or replace this step with an explicitly reviewed validation command.
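An explicitly reviewed replacement for the missing validator can be as small as a structural check. The required-file list and function name below are assumptions; the point is that the validation logic is visible and auditable rather than an unreviewed `quick_validate.py` found elsewhere on the machine:

```python
from pathlib import Path

# Minimal manifest; extend with whatever the child-skill spec requires.
REQUIRED_FILES = ("SKILL.md",)

def validate_child_skill(skill_dir: Path) -> list:
    """Return a list of problems found in a child-skill directory (sketch)."""
    problems = []
    for name in REQUIRED_FILES:
        if not (skill_dir / name).is_file():
            problems.append(f"missing required file: {name}")
    skill_md = skill_dir / "SKILL.md"
    if skill_md.is_file() and not skill_md.read_text().strip():
        problems.append("SKILL.md is empty")
    return problems
```

An empty returned list means the minimal checks passed; anything else should be fixed before packaging the child skill.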
Future reviews could be influenced by stale, biased, or poorly summarized OpenReview precedent.
The generated child skill persists a summarized OpenReview-derived evidence bank that future reviewer agents will read. The artifacts include appropriate cautionary language, but external/public evidence can still be biased, incomplete, or misleading if not curated.
Retrieve the local area review-response bank before writing review comments; treat OpenReview evidence as precedent, not law.
Curate the evidence bank, keep citations to forum URLs, avoid copying raw reviews, and refresh or label limited evidence when coverage is sparse.
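The curation advice above can be enforced at write time. The field names and URL check below are assumptions about one reasonable evidence-entry shape, not the skill's actual format:

```python
def make_evidence_entry(forum_url: str, summary: str, retrieved: str,
                        coverage: str = "normal") -> dict:
    """Build one curated evidence-bank entry (hypothetical schema)."""
    # Citations point at the public forum, never at copied raw review text.
    if not forum_url.startswith("https://openreview.net/forum?id="):
        raise ValueError("citation must be an OpenReview forum URL")
    return {
        "forum_url": forum_url,   # stable citation for future reviewers
        "summary": summary,       # short paraphrase, no verbatim reviews
        "retrieved": retrieved,   # ISO date, so staleness stays visible
        "coverage": coverage,     # e.g. "limited" when evidence is sparse
    }
```

Recording the retrieval date and a coverage label makes it straightforward for future reviewer agents to spot stale or sparse precedent instead of silently trusting it.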
