Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

claude-review

Self-review quality gate using Claude CLI. When the user says 'review your work', 'use review-work', or 'check your output', run review-work with the task su...

MIT-0 · Free to use, modify, and redistribute. No attribution required.
0 · 152 · 0 current installs · 0 all-time installs
Security Scan
VirusTotal
Benign
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The skill's stated purpose (run an independent Claude-based review) matches the included script. However, the SKILL.md repeatedly asserts the agent should 'determine all arguments yourself — the user does NOT need to specify them', while the shipped review-work.sh requires an explicit task summary and --context path. The SKILL.md also references a required Claude API key, but the registry metadata does not declare any required environment variable or credential: a mismatch between claimed needs and declared requirements.
Instruction Scope
The runtime instructions and script ask Claude to read all files at the provided path (and optionally a skill SKILL.md and LESSONS.md), which is coherent for a reviewer. However, the SKILL.md also contains a system-level reviewer prompt that is appended to the model invocation, and a pre-scan detected a 'system-prompt-override' pattern. The script invokes claude with --tools 'Read,Glob,Grep' and instructs the model to 'Read ALL files', which can expose arbitrary user files under the provided path. Combined with the appended system prompt and the --dangerously-skip-permissions flag used in the script, this raises the risk that the reviewer will access sensitive data if the context path is broad or mis-specified.
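To make the flagged pattern concrete, the sketch below reconstructs the invocation shape using only the flags quoted in this report; it is not the script's exact text. The command is assembled as a string for inspection rather than executed, and CONTEXT_PATH and TASK_SUMMARY are illustrative stand-ins for the script's arguments.

```shell
# Reconstruction of the flagged invocation shape (flags as quoted in this
# report; not the script's exact text). Built as a string for inspection,
# deliberately not executed.
CONTEXT_PATH="/tmp/todo-app"
TASK_SUMMARY="Build a todo app with React"
cmd="claude --print --dangerously-skip-permissions --tools 'Read,Glob,Grep' \
'Read ALL files under $CONTEXT_PATH and review against: $TASK_SUMMARY'"
echo "$cmd"
```

The key risk pairing is visible in one line: permission checks are skipped while the prompt directs the model to read everything under the context path.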
Install Mechanism
No install spec; this is an instruction-only skill plus a single shell script. Nothing is downloaded or written by an installer. Risk from install mechanism itself is low.
Credentials
The skill requires a working Claude CLI with a valid API key, and the SKILL.md documents a LESSONS.md path override via the LESSONS_FILE environment variable (and optionally SKILLS_DIR). Yet the registry metadata lists no required environment variables or primary credential. Because the need for a Claude API key is not declared in the metadata, the skill is under-declared and could mislead users about its credential requirements.
Persistence & Privilege
always:false (good). The script writes to a LESSONS.md in the user's home workspace (default ~/.openclaw/workspace/LESSONS.md) when reviews fail — persistent storage of review failures is intentional for the feature. This is not an escalation of platform privileges, but it does create persistent files in the user's home and may aggregate review results; users should confirm they are comfortable with that path and its contents.
Scan Findings in Context
[system-prompt-override] expected: The skill appends a system prompt to the Claude invocation to instruct the reviewer. Appending a system prompt is expected for controlling a reviewer, but the pattern is a recognized prompt-injection indicator and combined with --dangerously-skip-permissions it increases risk; review the prompt text and the CLI flags carefully.
What to consider before installing
This skill is essentially a wrapper that calls your local 'claude' CLI to perform a file-based review and then optionally appends failed items to a LESSONS.md in your home workspace. Before installing or enabling it:

  1. Confirm you have the claude CLI and a Claude API key, and understand where that key is stored (the skill metadata does not declare it).
  2. Inspect the script (review-work.sh) yourself. It uses --dangerously-skip-permissions and asks the model to read ALL files under the provided context path, so avoid passing broad paths (like ~ or /) that could expose unrelated files.
  3. Be aware it will create or append to LESSONS.md, by default at ~/.openclaw/workspace/LESSONS.md (or the path in LESSONS_FILE); if you don't want persistent logs, set LESSONS_FILE to a location you control or remove the auto-log block.
  4. The SKILL.md claims the agent will auto-determine arguments, but the script requires an explicit task and context; confirm how your agent integration will populate those arguments.
  5. If you plan to use this in production or with sensitive data, test it in a sandbox first, and consider removing or modifying the --dangerously-skip-permissions flag or tightening the allowed tool usage before trusting it with private files.
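A minimal pre-install setup along those lines might look like the sketch below: a narrow, disposable context directory and a lesson log redirected away from the ~/.openclaw default. The sandbox path and stand-in file are examples, and the review-work call only fires if the command is actually on PATH.

```shell
# Safer-setup sketch: narrow context dir, project-controlled LESSONS_FILE.
# Paths and the stand-in file are illustrative examples.
export LESSONS_FILE="${TMPDIR:-/tmp}/review-lessons.md"  # keep logs out of ~/.openclaw
sandbox="$(mktemp -d)"                                   # review ONLY copied files
printf 'print("hello")\n' > "$sandbox/email.py"          # stand-in work product
if command -v review-work >/dev/null 2>&1; then
  review-work "Write a Python email validator" --context "$sandbox"
else
  echo "review-work not installed; dry run only (sandbox: $sandbox)"
fi
```

Because the reviewer reads everything under --context, copying only the files under review into a fresh directory is the simplest way to bound what it can see.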

Like a lobster shell, security has layers — review code before you run it.

Current versionv1.0.1
Download zip
latest · vk973d8ad2prmgqdjw7nfw748hx82q876

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

SKILL.md

claude-review — Self-Review Quality Gate

Uses Claude CLI (claude --print) as an independent reviewer to catch errors, missed requirements, and quality issues in your work before delivering to the user.

How It Works

  1. You complete your task and save output to file(s)
  2. review-work sends your work to a separate Claude instance for independent review
  3. If a skill was used, the reviewer checks against the skill's specific requirements
  4. If LESSONS.md exists, the reviewer checks for repeat mistakes
  5. Issues are returned with severity ratings (critical / major / minor) and a PASS/FAIL verdict
  6. You fix issues and re-review until clean

The reviewer is a separate Claude instance — it has no context of your conversation, so it reviews purely on merit.

Auto-learning: When a review fails, critical and major issues are automatically logged to LESSONS.md. This file is auto-included in future reviews so the reviewer checks for repeat mistakes.
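A minimal sketch of that auto-log step is below. The entry format is an assumption for illustration; the real format is whatever review-work.sh writes. The default path and the LESSONS_FILE override are as documented in this SKILL.md.

```shell
# Sketch of the auto-log step. The entry format here is an illustrative
# assumption; review-work.sh defines the real one.
LESSONS_FILE="${LESSONS_FILE:-$HOME/.openclaw/workspace/LESSONS.md}"
mkdir -p "$(dirname "$LESSONS_FILE")"
{
  printf '## %s FAIL: %s\n' "$(date +%Y-%m-%d)" "Write a Python email validator"
  printf -- '- [critical] Regex rejects plus-addressed emails\n'
} >> "$LESSONS_FILE"
```

Because the file is appended to on every failed review and auto-read on every future review, it grows into a running checklist of past mistakes.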

Prerequisites

  • claude CLI must be installed and available in PATH (npm install -g @anthropic-ai/claude-code)
  • Valid API key configured for Claude CLI

Command

review-work "<task_summary>" --context <file_or_folder> [--skill <file_or_folder>]
Argument          Required  Description
task_summary      Yes       What the work was supposed to accomplish
--context <path>  Yes       File or folder containing the work to review. Can also include reference material, test output, or anything relevant.
--skill <path>    No        SKILL.md or skill folder used for this task. The reviewer uses its requirements as a definition of done.

Auto-included (no flag needed):

  • LESSONS.md — if it exists, always included so the reviewer checks for repeat mistakes

All paths accept both files and folders. Claude reads all file types natively (text, images, PDFs, code).

Workflow

When instructed to review your work:

  1. Identify every file you created or modified
  2. Run review-work with the task summary, --context pointing to your output, and --skill if a skill was used
  3. Read the review output — look for VERDICT: PASS or FAIL
  4. Fix any critical or major issues
  5. Re-run review-work after fixing (up to 3 cycles)
  6. Report the review summary in your final output
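The workflow steps above can be sketched as a shell loop. Since the real review-work needs the claude CLI and an API key, review_work below is a stub that fails twice and then passes, purely to exercise the up-to-3-cycles logic and the VERDICT: PASS/FAIL convention this SKILL.md documents.

```shell
# Sketch of the fix/re-review loop. review_work is a stub standing in for
# the real review-work command so the loop can run standalone.
attempt=0
review_work() {
  # stub: fails twice, then passes
  attempt=$((attempt + 1))
  if [ "$attempt" -lt 3 ]; then
    echo "VERDICT: FAIL"
  else
    echo "VERDICT: PASS"
  fi
}

report="$(mktemp)"
cycles=0
verdict=FAIL
while [ "$cycles" -lt 3 ] && [ "$verdict" != PASS ]; do
  cycles=$((cycles + 1))
  review_work "Build a todo app" --context /tmp/todo-app/ > "$report"
  grep -q "VERDICT: PASS" "$report" && verdict=PASS
  # in the real workflow: fix critical/major issues here before re-reviewing
done
echo "verdict=$verdict after $cycles cycle(s)"
```

Note the verdict is captured via a temp file rather than command substitution, so the function's state changes survive each call in a plain POSIX shell.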

Examples

Review a single file:

review-work "Write a Python email validator" --context /tmp/email.py

Review with skill context (reviewer verifies against skill requirements):

review-work "Write an SEO blog about class action lawsuits" --context /tmp/blog.md --skill ~/.openclaw/workspace/skills/seo-content-writer/SKILL.md

Review an entire project folder:

review-work "Build a todo app with React" --context /tmp/todo-app/ --skill ~/skills/fullstack/SKILL.md

Review with extra context (reference articles, test output, etc.):

# Put your output + reference material in one folder
review-work "Write a blog matching MoneyPilot tone" --context /tmp/blog-project/

Rules

  1. Review every file you created or modified — not just the main one
  2. If a skill was used for the task, always pass --skill
  3. If the review reports critical or major issues → fix them → re-review (up to 3 cycles)
  4. Only finish after the verdict is PASS (zero critical/major issues)
  5. Include the review summary in your final output
  6. After 3 failed cycles, finish but attach the full review report

What NOT to Do

  • Do NOT ask the user for arguments — you already know what you created and which skill you used
  • Do NOT say "review passed" without actually running the command
  • Do NOT fabricate review results — the command produces real output
  • Do NOT forget --skill when a skill was involved in the task

LESSONS.md

Failed reviews are auto-logged to LESSONS.md (default: ~/.openclaw/workspace/LESSONS.md). Override the path with the LESSONS_FILE environment variable.

This file is also auto-read on every review, so the reviewer checks: "are any past mistakes being repeated?"

Files

2 total
