Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Review Agent

v2.1.2

Pre-meeting review coach for Lark/Feishu (or WeCom). Invoked when a Requester DMs their dedicated review-agent subagent with a draft, proposal, plan, or 1:1...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for yinghaojia/review-agent.

Prompt preview: Install & Setup
Install the skill "Review Agent" (yinghaojia/review-agent) from ClawHub.
Skill page: https://clawhub.ai/yinghaojia/review-agent
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required binaries: python3
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install review-agent

ClawHub CLI

Package manager switcher

npx clawhub@latest install review-agent
Security Scan
Capability signals
Crypto · Requires OAuth token · Requires sensitive credentials
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal
Suspicious
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The described functionality (per-peer review subagents, file-based sessions, Python scripts) aligns with the 'review coach' claim. However, the SKILL and POST_INSTALL ask the admin to patch the OpenClaw core (feishu_seed_workspace_patch.py) and to grant Lark/Feishu app scopes — capabilities that are not declared in the registry metadata (no required env vars). Requesting a platform core patch is a heavy, privileged action that should be justified and audited.
Instruction Scope
Runtime instructions read and write many local files under ~/.openclaw and per-session folders; they also load persona/profile documents into LLM system prompts. The SKILL.md explicitly instructs running multiple scripts that will read boss_profile.md, review_rules.md, and sessions/* files, and may access gateway/token config via openclaw. Admin docs instruct running external install/update scripts and applying a core patch. The skill's instructions therefore go beyond lightweight in-agent behavior and include system-level modifications and network fetches — scope creep from a simple 'review/coach' capability.
Install Mechanism
The registry lists no formal install spec, but POST_INSTALL and README instruct admins to git clone a GitHub repo and run install.sh/update.sh which fetch code and apply a core patch. update.sh and install.sh will pull remote code (github.com/jimmyag2026-prog/...), and a supplied patch modifies openclaw core files. Fetching and running remote install/patch scripts is higher risk and should be audited before execution.
Credentials
The metadata claims no required env vars, but documentation and delivery/backends clearly expect platform credentials and tools: OpenClaw feishu gateway config (tenant access tokens), Lark/Feishu app scopes, optional send_mail/Gmail SMTP, and optional ~/bin/lark_send or ~/bin/send_mail helpers. Those credentials are functionally necessary for the skill to reach Responder/Requester or to push docs yet are not declared in requires.env. This mismatch is an incoherence and increases risk of unexpected access to sensitive tokens.
Persistence & Privilege
The skill asks admins to apply a persistent patch to the openclaw core to alter workspace seeding behavior and includes update/uninstall scripts that can change system state. While 'always: false', the core patch and admin helpers give the skill global effect on the platform (modifying how dynamic agent creation seeds personas). That is a substantial privilege and should not be applied without review; it also increases blast radius if the skill is later updated from upstream.
Scan Findings in Context
[system-prompt-override] expected: The skill imports persona/profile files and uses them to construct system prompts for per-peer subagents; that use of system‑prompt material is expected for an agent that enforces a responder persona. However it is also a recognized prompt-injection pattern: these persona files will be fed into LLM system prompts, so malicious or poorly-sanitized content in those files could alter agent behavior. Treat persona/profile files as sensitive inputs and review them carefully.
What to consider before installing
Key things to check before installing or running this skill:

  • Do not run install/update/patch scripts (install.sh / update.sh / feishu_seed_workspace_patch.py) on production systems until reviewed. These scripts clone code from a third-party GitHub repo and apply a core patch to OpenClaw; review every line of those scripts and the patch file in a trusted environment.
  • Verify the upstream repository and the publisher identity. Confirm the GitHub repo (https://github.com/jimmyag2026-prog/review-agent-skill) is trustworthy, and inspect commit history, recent changes, and install scripts for unexpected network calls or arbitrary command execution.
  • Audit scripts for network endpoints and secrets handling. Search the code for hardcoded URLs, remote upload routines, or open network sockets (e.g., dashboard-server.py), and confirm they point only to expected hosts.
  • Check for undeclared credential needs. The skill needs Lark/Feishu app scopes and OpenClaw gateway credentials to function; do not supply broad platform tokens blindly. Prefer scoped service accounts and rotate tokens after testing.
  • Sandbox first. Install and run the skill in an isolated test environment (VM/container) with no access to production secrets or tenant tokens, and simulate a minimal session to observe behavior, file writes, and outbound connections.
  • Review persona/profile files and templates. Since persona files are injected into LLM system prompts, inspect agent_persona.md, boss_profile.md, and other templates for any instructions that could force the model to leak data or override system-level safeguards.
  • Disable automatic updates until you trust the source. Do not enable any auto-update behavior; pull updates manually after review.
  • If you must proceed in production, minimize scope: restrict Lark/Feishu app scopes to the least privilege needed, do not apply the core patch until reviewed, and back up the OpenClaw core before applying any modifications.
If you want, I can: (1) list specific files/lines to inspect next (install.sh, update.sh, feishu_seed_workspace_patch.py, scripts that call network), (2) scan the included Python scripts for obvious I/O/network calls to external hosts, or (3) produce a short sandbox test plan you can run safely.
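As a starting point for the "audit scripts for network endpoints" step above, a quick automated pass over the shipped scripts can surface lines worth a closer look. This is a minimal sketch, not a substitute for line-by-line review; the pattern list is an assumption about what counts as suspicious, and the `audit` helper name is illustrative:

```python
import re
from pathlib import Path
from typing import Dict, List

# Patterns worth a closer look: hardcoded URLs, network modules, shell-outs.
SUSPECT = re.compile(
    r"https?://[^\s\"']+"                   # hardcoded endpoints
    r"|\bcurl\b|\bwget\b"                   # remote fetches in shell scripts
    r"|\bsocket\b|\burllib\b|\brequests\b"  # Python networking
    r"|\bsubprocess\b|os\.system"           # arbitrary command execution
)

def audit(root: str) -> Dict[str, List[str]]:
    """Map each .py/.sh file under root to the lines matching a suspect pattern."""
    hits: Dict[str, List[str]] = {}
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix not in {".py", ".sh"}:
            continue
        lines = path.read_text(errors="replace").splitlines()
        matched = [f"{n}: {line.strip()}"
                   for n, line in enumerate(lines, 1)
                   if SUSPECT.search(line)]
        if matched:
            hits[str(path)] = matched
    return hits
```

Anything this flags still needs human judgment — a match only says "this line touches the network or runs commands", not that it is malicious.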

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

📋 Clawdis
OSmacOS · Linux
Binspython3
Tags: feishu · hotfix · lark · latest · linux-compat · review · wecom
107 downloads
0 stars
5 versions
Updated 3d ago
v2.1.2
MIT-0
macOS, Linux

review-agent · openclaw skill

You are the review-agent skill inside a per-peer subagent workspace. The subagent's SOUL.md + AGENTS.md set persona and the command table; this file describes the skill's scripts — what they do, when to call, and how.

When to invoke this skill

Invoke when any of:

  • The Requester sends /review start (optionally with subject)
  • The Requester sends /review end, /review status, /review help
  • The Requester sends an attachment (PDF / image / audio / Lark doc URL / Google Doc URL / long text ≥300 chars with headers/tables)
  • There's an active session (./sessions/<id>/meta.json with status=active or status=awaiting_subject_confirmation) and the Requester replies with anything that isn't /chat or exit signal
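The last trigger above hinges on reading each session's meta.json and checking its status field. A minimal sketch of that check (the status values come from this doc; the helper name and layout are illustrative):

```python
import json
from pathlib import Path
from typing import Optional

# Statuses that mean a Requester reply should route into the review flow.
ACTIVE_STATUSES = {"active", "awaiting_subject_confirmation"}

def find_active_session(sessions_dir: str) -> Optional[str]:
    """Return the id of the first session whose meta.json marks it active."""
    for meta_path in sorted(Path(sessions_dir).glob("*/meta.json")):
        try:
            meta = json.loads(meta_path.read_text())
        except (OSError, json.JSONDecodeError):
            continue  # unreadable meta.json: treat the session as inactive
        if meta.get("status") in ACTIVE_STATUSES:
            return meta_path.parent.name
    return None
```

A reply routes to the review flow only when this returns a session id and the message is not /chat or an exit signal.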

Scripts (all run from the peer workspace cwd)

| Script | When | Returns on stdout | Side effects |
| --- | --- | --- | --- |
| scripts/ingest.py <sd> | After initial attachment drop into <sd>/input/ | status; body in <sd>/normalized.md | writes normalized.md; on tool-missing → ingest_failed.json + exit 3 |
| scripts/confirm-topic.py <sd> | After ingest, before scan | confirmation question text (for you to send via feishu_chat) | writes subject_confirm_draft.md |
| scripts/scan.py <sd> | After Requester confirms topic | count summary | writes annotations.jsonl, cursor.json |
| scripts/qa-step.py <session_id> "<reply>" | Every Requester turn | next finding to emit | updates annotations.jsonl, cursor.json, dissent.md |
| scripts/merge-draft.py <sd> | When cursor pending empty | ---PREVIEW--- + diff highlights | writes final/revised.md, final/revised_changelog.md |
| scripts/final-gate.py <sd> --verify-final | After merge | JSON verdict | writes verdict to stdout |
| scripts/_build_summary.py (imported) | On close | 6-section decision brief | no files unless caller writes |
| scripts/check-profile.py <profile> | Before session start | warning if placeholders | exit 1 = placeholders found |
| scripts/check-updates.py | On demand | update-available line | caches to ~/.openclaw/review-agent/.update-check.json |

Happy path (new review from scratch)

  1. Requester sends proposal.pdf to subagent via Lark DM
  2. You (subagent) save the PDF to ./sessions/<timestamp-slug>/input/proposal.pdf and seed ./sessions/<id>/meta.json
  3. python3 ~/.openclaw/skills/review-agent/scripts/ingest.py ./sessions/<id>/
    • If exit 3 → relay ingest_failed.json.lark_message to Lark, stop, mark session ingest_failed
  4. python3 ~/.openclaw/skills/review-agent/scripts/confirm-topic.py ./sessions/<id>/
    • Pipe stdout → feishu_chat.send (Requester reads it)
  5. When Requester confirms: python3 ~/.openclaw/skills/review-agent/scripts/scan.py ./sessions/<id>/
  6. Read cursor.json.current_id, emit the finding's issue text via feishu_chat
  7. Requester replies → python3 ~/.openclaw/skills/review-agent/scripts/qa-step.py <session_id> "<reply>" → its stdout is the next message for Requester
  8. Loop step 7 until cursor.pending is empty
  9. merge-draft.py → final-gate.py --verify-final
  10. If verdict is READY/READY_WITH_OPEN_ITEMS → publish to Lark doc via native feishu_doc.create + feishu_drive.share; send 6-section summary to both parties via feishu_chat; set meta.status=closed
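The happy path amounts to a pipeline of subprocess calls whose stdout is relayed verbatim. A condensed sketch of steps 3-4 (script paths are from this doc; the feishu send is left as a caller-supplied stub since the actual gateway call is not specified here, and skill_dir is parameterized only for testability):

```python
import subprocess
import sys
from pathlib import Path

SKILL = Path.home() / ".openclaw/skills/review-agent/scripts"

def run(script: str, *args: str, skill_dir: Path = SKILL) -> subprocess.CompletedProcess:
    """Invoke one skill script, capturing stdout for relay to the Requester."""
    return subprocess.run(
        [sys.executable, str(skill_dir / script), *args],
        capture_output=True, text=True,
    )

def start_review(session_dir: str, send) -> bool:
    """Steps 3-4 of the happy path: ingest, then ask to confirm the topic."""
    ingest = run("ingest.py", session_dir)
    if ingest.returncode == 3:
        # relay ingest_failed.json.lark_message and mark the session ingest_failed
        return False
    send(run("confirm-topic.py", session_dir).stdout)  # structured stdout only
    return True
```

Note that only structured stdout reaches `send` — stderr and tracebacks stay local, matching the MUST NOT rules below.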

What you MUST NOT do

  • Directly extract PDF/image/audio content yourself (no pdftotext, tesseract, whisper calls from your Bash) — ingest.py owns that
  • Compose the revised brief yourself — merge-draft.py owns that
  • Relay tool output previews / bash commands / stderr / tracebacks to Lark — only structured stdout from these scripts should reach the Requester
  • Read ./sessions/*/ from any workspace other than yours (architectural — openclaw won't let you, but don't try)

References

See references/:

  • agent_persona.md — full persona (imported by scripts into LLM system prompts)
  • four_pillars.md — pillar definitions
  • annotation_schema.md — finding JSON schema
  • summary_template.md — 6-section brief format
  • template/ — default admin_style.md, review_rules.md, boss_profile.md (used by install)

Admin tools (human runs from CLI — NOT invoked by subagent)

These live at the skill root so they travel with distributions. Subagents do NOT call them and they're not listed in AGENTS.md of peer workspaces.

  • update.sh — fetch latest skill from GitHub and re-install. Respects VERSION stamp; preserves peer workspaces + global responder profile.
  • uninstall.sh — remove skill + template. With --purge, also removes global config + per-peer workspaces. With --revert-config, unsets the openclaw.json knobs this skill introduced.

Self-check the installed version any time:

cat ~/.openclaw/skills/review-agent/VERSION
bash ~/.openclaw/skills/review-agent/update.sh --check
