Skill v1.0.0
ClawScan security
Reflect Critique Revise · ClawHub's context-aware review of the artifact, metadata, and declared behavior.
Scanner verdict
Benign · Apr 21, 2026, 1:52 PM
- Verdict: benign
- Confidence: medium
- Model: gpt-5-mini
- Summary: The skill is internally coherent as a code-review helper: it sends code to an LLM endpoint and returns revised code. However, it transmits whatever draft you provide to the configured endpoint, so confirm that the endpoint is trusted and that the small registry/metadata mismatches are intentional.
- Guidance: This skill transmits your provided code and task text to whatever LLM endpoint is configured (OPENCLAW_LLM_ENDPOINT, defaulting to http://localhost:8080). Before installing:
  1. Verify the endpoint is trusted (prefer local or vetted cloud endpoints); do not send sensitive or proprietary code to an untrusted remote service.
  2. Ensure python3 and aiohttp are installed in the runtime environment.
  3. Note the registry metadata inconsistency: the top-level 'Requirements' block lists no required env vars or binaries, yet SKILL.md and the script expect OPENCLAW_LLM_ENDPOINT and aiohttp. Ask the publisher to clarify if needed.
  4. Triggers may auto-run the skill after code generation; if you do not want automatic reviews, control invocation or remove the triggers.
  5. The included code sends a model name string ("m27-jangtq-crack") in requests; this is just a parameter passed to your configured endpoint, but confirm it selects the intended backend/model.

  If you cannot verify the endpoint or publisher, run the script in a sandbox, or inspect and modify it so it posts only to a trusted LLM.
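Point 1 can be enforced mechanically before the skill runs. A minimal sketch, assuming only the env var name and localhost default quoted in this report; the allowlist and the `resolve_endpoint` helper are hypothetical additions, not part of the published skill:

```python
import os
from urllib.parse import urlparse

# Hypothetical guard: the skill itself simply reads OPENCLAW_LLM_ENDPOINT
# with a localhost default. This wrapper adds an allowlist check so code
# is never posted to an unvetted host.
TRUSTED_HOSTS = {"localhost", "127.0.0.1"}  # extend with vetted cloud hosts

def resolve_endpoint() -> str:
    endpoint = os.environ.get("OPENCLAW_LLM_ENDPOINT", "http://localhost:8080")
    host = urlparse(endpoint).hostname
    if host not in TRUSTED_HOSTS:
        raise RuntimeError(f"refusing untrusted LLM endpoint: {endpoint}")
    return endpoint
```

With the variable unset this returns the skill's default; pointing it at an unlisted host raises instead of silently transmitting code.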
Review Dimensions
- Purpose & Capability
- ok: The skill's name/description (multi-pass code critique + revision) matches its code and SKILL.md: it calls an LLM, runs critique/revise/confidence prompts, and produces revised code. The SKILL.md declares python3, aiohttp, and OPENCLAW_LLM_ENDPOINT, which align with the implementation. Note: the registry summary at the top of the package (the 'Requirements' block) lists no required env vars or binaries, which is inconsistent with the SKILL.md and with the included Python implementation, which uses aiohttp and reads OPENCLAW_LLM_ENDPOINT.
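The multi-pass flow this dimension describes (critique, then revise, then a confidence check) can be sketched roughly as follows. `call_llm` is a stand-in for the skill's real aiohttp POST to the configured endpoint, and the prompt wording is invented for illustration:

```python
def call_llm(prompt: str) -> str:
    # Stub: the real skill POSTs the prompt to OPENCLAW_LLM_ENDPOINT via
    # aiohttp. Faking the response keeps this sketch runnable offline.
    return "LLM output for: " + prompt.splitlines()[0]

def review(code: str, task: str, passes: int = 2):
    """Critique -> revise loop, then a final confidence self-assessment."""
    draft = code
    for _ in range(passes):
        critique = call_llm(f"Critique this code for task: {task}\n{draft}")
        draft = call_llm(f"Revise the code given the critique:\n{critique}\n{draft}")
    confidence = call_llm(f"Rate your confidence in the revision:\n{draft}")
    return draft, confidence  # revised code plus the model's self-assessment
```

Note that every pass re-sends the full draft, so each round trip carries the complete code, not a diff.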
- Instruction Scope
- note: The runtime instructions and implementation explicitly send the entire draft code and task text to a configured LLM endpoint. That is expected for a code-review skill, but it means any sensitive code provided will be transmitted to that endpoint. The skill does not attempt to read unrelated system files or other environment variables beyond OPENCLAW_LLM_ENDPOINT, nor does the code exfiltrate to additional endpoints.
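What actually leaves the machine can be illustrated with a request-body sketch. The JSON field names here are assumptions; what the scan does attest is that the full draft code, the task text, and the model string flagged under Guidance travel verbatim:

```python
import json

# Sketch of the outbound request body. Field names are hypothetical; the
# point is that the entire draft and task text are serialized as-is.
def build_payload(code: str, task: str) -> bytes:
    return json.dumps({
        "model": "m27-jangtq-crack",  # model string noted in the scan
        "prompt": f"Task: {task}\n\nCode:\n{code}",
    }).encode()

payload = build_payload("secret_key = 'abc'", "review this")
assert b"secret_key" in payload  # sensitive literals go over the wire verbatim
```

Any hard-coded secret or proprietary identifier in the draft therefore reaches whatever host OPENCLAW_LLM_ENDPOINT names.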
- Install Mechanism
- note: There is no external download/install step (the skill is instruction-only plus a bundled Python file). The Python script depends on python3 and aiohttp; SKILL.md lists aiohttp in python_packages. Because there is no formal install spec in the registry, deployment will require a runtime that can install or satisfy that dependency. No arbitrary URL downloads or archive extraction are present.
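Since there is no formal install spec, a quick pre-flight check of the two declared dependencies might look like this (a sketch; the dependency names come from the SKILL.md quoted above):

```shell
# Verify python3 is on PATH and aiohttp is importable in that interpreter.
if ! command -v python3 >/dev/null 2>&1; then
    echo "python3 not found" >&2
    exit 1
fi
python3 -c "import aiohttp" 2>/dev/null \
    || echo "aiohttp missing; install it (e.g. pip install aiohttp)"
```

Running this before installing the skill surfaces the unstated runtime requirements up front rather than at first invocation.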
- Credentials
- ok: The only environment variable used is OPENCLAW_LLM_ENDPOINT, which determines which LLM to call (with a localhost default). No unrelated credential or secret variables are requested. This is proportionate to the skill's purpose, but supplying a remote endpoint delegates trust to that endpoint.
- Persistence & Privilege
- ok: The skill does not request always:true, does not modify other skills or system-wide settings, and runs only when invoked (or when triggers fire). Autonomous invocation is allowed (the default) but is not combined with elevated privileges in this package.
