Skill v2.1.2

ClawScan security

Review Agent · ClawHub's context-aware review of the artifact, metadata, and declared behavior.

Scanner verdict

Suspicious · Apr 25, 2026, 12:02 AM
Verdict
suspicious
Confidence
medium
Model
gpt-5-mini
Summary
The skill largely matches a 'review coach' purpose, but several operational and installation choices are inconsistent or high‑privilege (core patching, GitHub fetches, undeclared Feishu/Lark credentials, and prompt-injection patterns), so proceed only after careful review and sandboxed testing.
Guidance
Key things to check before installing or running this skill:
- Do not run install/update/patch scripts (install.sh / update.sh / feishu_seed_workspace_patch.py) on production systems until reviewed. These scripts clone code from a third-party GitHub repo and apply a core patch to OpenClaw; review every line in those scripts and the patch file in a trusted environment.
- Verify the upstream repository and the publisher identity. Confirm the GitHub repo (https://github.com/jimmyag2026-prog/review-agent-skill) is trustworthy and inspect commit history, recent changes, and install scripts for unexpected network calls or arbitrary command execution.
- Audit scripts for network endpoints and secrets handling. Search the code for hardcoded URLs, remote upload routines, or uses of open network sockets (e.g., dashboard-server.py), and confirm they point only to expected hosts.
- Check for undeclared credential needs. The skill will need Lark/Feishu app scopes and OpenClaw gateway credentials to function; do not supply broad platform tokens blindly. Prefer scoped service accounts and rotate tokens after testing.
- Sandbox first. Install and run the skill in an isolated test environment (VM/container) with no access to production secrets or tenant tokens, and simulate a minimal session to observe behavior, file writes, and outbound connections.
- Review persona/profile files and templates. Since persona files are injected into LLM system prompts, inspect agent_persona.md, boss_profile.md, and other templates for any instructions that could force the model to leak data or override system-level safeguards.
- Disable automatic updates until you trust the source. Do not enable any auto-update behavior; pull updates manually after review.
- If you must proceed in production, minimize scope: restrict Lark/Feishu app scopes to the least privilege needed, do not apply the core patch until it has been reviewed, and back up the OpenClaw core before applying any modifications.
If you want, I can: (1) list specific files/lines to inspect next (install.sh, update.sh, feishu_seed_workspace_patch.py, scripts that call network), (2) scan the included Python scripts for obvious I/O/network calls to external hosts, or (3) produce a short sandbox test plan you can run safely.
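Option (2) above — scanning the included Python scripts for obvious I/O or network calls — can be sketched with a simple AST walk. The call-name watchlist is illustrative, not exhaustive, and the directory layout is assumed:

```python
import ast
from pathlib import Path

# Call names that suggest network I/O or command execution.
# An illustrative (not exhaustive) watchlist.
SUSPECT_CALLS = {
    "urlopen", "socket", "connect", "request", "get", "post",
    "system", "popen", "run", "check_output", "exec", "eval",
}

def flag_calls(source: str, filename: str) -> list[str]:
    """Return 'file:line: name()' entries for suspicious-looking calls."""
    hits = []
    tree = ast.parse(source, filename)
    for node in ast.walk(tree):
        if isinstance(node, ast.Call):
            func = node.func
            # Catch both bare names (run(...)) and attributes (requests.get(...)).
            name = getattr(func, "attr", None) or getattr(func, "id", None)
            if name in SUSPECT_CALLS:
                hits.append(f"{filename}:{node.lineno}: {name}()")
    return hits

if __name__ == "__main__":
    for path in Path(".").rglob("*.py"):
        try:
            hits = flag_calls(path.read_text(), str(path))
        except (SyntaxError, UnicodeDecodeError):
            print(f"{path}: could not parse")
            continue
        if hits:
            print("\n".join(hits))
```

A hit is only a lead, not a verdict: the point is to produce a short list of lines to read by hand in a trusted environment.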
Findings
[system-prompt-override] expected: The skill imports persona/profile files and uses them to construct system prompts for per-peer subagents; that use of system-prompt material is expected for an agent that enforces a responder persona. However, it is also a recognized prompt-injection vector: because these persona files are fed into LLM system prompts, malicious or poorly sanitized content in them could alter agent behavior. Treat persona/profile files as sensitive inputs and review them carefully.
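Reviewing persona files like agent_persona.md and boss_profile.md by hand can be supplemented with a crude keyword pass. A minimal sketch — the patterns are illustrative assumptions, not a complete injection detector:

```python
import re
from pathlib import Path

# Phrases commonly associated with prompt-injection attempts.
# An illustrative watchlist; a clean scan proves nothing on its own.
INJECTION_PATTERNS = [
    r"ignore (all |any |previous |prior )*(instructions|rules)",
    r"disregard (the )?(system|above)",
    r"you are now",
    r"do not (tell|reveal|mention)",
    r"exfiltrat|upload .* to http",
]

def scan_persona(text: str) -> list[str]:
    """Return the watchlist patterns that match anywhere in the file."""
    lowered = text.lower()
    return [p for p in INJECTION_PATTERNS if re.search(p, lowered)]

if __name__ == "__main__":
    for name in ("agent_persona.md", "boss_profile.md"):
        path = Path(name)
        if path.exists():
            hits = scan_persona(path.read_text())
            print(name, "->", hits or "no matches")
```

Because injections can be paraphrased or encoded, this only catches the obvious cases; the full files still need human review before they are allowed into a system prompt.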

Review Dimensions

Purpose & Capability
note: The described functionality (per-peer review subagents, file-based sessions, Python scripts) aligns with the 'review coach' claim. However, the SKILL and POST_INSTALL docs ask the admin to patch the OpenClaw core (feishu_seed_workspace_patch.py) and to grant Lark/Feishu app scopes — capabilities that are not declared in the registry metadata (no required env vars). Requesting a platform core patch is a heavy, privileged action that should be justified and audited.
Instruction Scope
concern: Runtime instructions read and write many local files under ~/.openclaw and per-session folders; they also load persona/profile documents into LLM system prompts. The SKILL.md explicitly instructs running multiple scripts that read boss_profile.md, review_rules.md, and sessions/* files, and may access gateway/token config via openclaw. Admin docs instruct running external install/update scripts and applying a core patch. The skill's instructions therefore go beyond lightweight in-agent behavior and include system-level modifications and network fetches — scope creep from a simple 'review coach' capability.
Install Mechanism
concern: The registry lists no formal install spec, but POST_INSTALL and the README instruct admins to git clone a GitHub repo and run install.sh/update.sh, which fetch code and apply a core patch. update.sh and install.sh pull remote code (github.com/jimmyag2026-prog/...), and a supplied patch modifies OpenClaw core files. Fetching and running remote install/patch scripts is higher risk and should be audited before execution.
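Before executing install.sh or update.sh, a quick static pass can surface the riskiest shell constructs (remote fetches, pipe-to-shell, sudo, patch application). A minimal sketch — the pattern list is illustrative, not a complete audit:

```python
import re

# Shell constructs worth flagging in install/update scripts.
# Illustrative; a manual line-by-line review is still required.
RISKY_SHELL = {
    "remote fetch": re.compile(r"\b(curl|wget|git clone)\b"),
    "pipe to shell": re.compile(r"\|\s*(ba)?sh\b"),
    "privilege escalation": re.compile(r"\bsudo\b"),
    "core patch": re.compile(r"patch"),
}

def audit_script(text: str) -> dict[str, list[int]]:
    """Map each risk label to the 1-based line numbers where it appears."""
    report: dict[str, list[int]] = {}
    for lineno, line in enumerate(text.splitlines(), start=1):
        for label, pattern in RISKY_SHELL.items():
            if pattern.search(line):
                report.setdefault(label, []).append(lineno)
    return report

if __name__ == "__main__":
    for name in ("install.sh", "update.sh"):
        try:
            with open(name) as fh:
                print(name, "->", audit_script(fh.read()))
        except FileNotFoundError:
            pass
```

Every flagged line should then be read in context: a `git clone` of the expected repo may be fine, while `curl … | sh` or an unexplained core patch is a reason to stop.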
Credentials
concern: The metadata declares no required env vars, but the documentation and delivery backends clearly expect platform credentials and tools: OpenClaw Feishu gateway config (tenant access tokens), Lark/Feishu app scopes, optional send_mail/Gmail SMTP, and optional ~/bin/lark_send or ~/bin/send_mail helpers. Those credentials are functionally necessary for the skill to reach the Responder/Requester or to push docs, yet they are not declared in requires.env. This mismatch is an incoherence and increases the risk of unexpected access to sensitive tokens.
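The declared-vs-actual credential mismatch can be checked mechanically: extract every environment variable the code reads and subtract the set the metadata declares. A minimal sketch, assuming the usual os.environ / os.getenv access forms (the variable names in the usage example are hypothetical):

```python
import re

# Common ways Python code reads environment variables:
# os.environ["X"], os.environ.get("X"), os.getenv("X").
# Illustrative, not exhaustive (misses indirect lookups).
ENV_ACCESS = re.compile(
    r"os\.(?:environ(?:\.get\(|\[)|getenv\()\s*['\"](\w+)['\"]"
)

def undeclared_env_vars(source: str, declared: set[str]) -> set[str]:
    """Env vars the code reads that the skill metadata does not declare."""
    used = set(ENV_ACCESS.findall(source))
    return used - declared
```

Running this across the skill's scripts with `declared=set()` (the registry declares nothing) would enumerate exactly which tokens the skill actually depends on, which is what requires.env should have listed.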
Persistence & Privilege
concern: The skill asks admins to apply a persistent patch to the OpenClaw core to alter workspace-seeding behavior, and includes update/uninstall scripts that can change system state. While 'always: false', the core patch and admin helpers give the skill global effect on the platform (modifying how dynamic agent creation seeds personas). That is a substantial privilege and should not be granted without review; it also increases the blast radius if the skill is later updated from upstream.