Skill v1.2.0

ClawScan security

RLM Controller · ClawHub's context-aware review of the artifact, metadata, and declared behavior.

Scanner verdict

Benign · Feb 14, 2026, 5:59 PM
Verdict
benign
Confidence
high
Model
gpt-5-mini
Summary
The skill's bundled scripts, tests, and documentation are consistent with its stated RLM long‑context controller purpose; required artifacts and safeguards (path validation, regex timeouts, redaction, hard limits) line up with the description.
Guidance
This skill appears internally consistent and implements the safeguards it documents (path containment, regex timeouts, secret redaction, hard caps on slices/subcalls). Before installing:
1. Review the few truncated/omitted files (particularly any toolcall emission or spawn code) to confirm tool names are hard-coded and no network calls or dynamic exec of model output are present.
2. If you operate in a high-security environment, set disableModelInvocation: true so the agent cannot autonomously spawn batches without your approval.
3. Run the bundled tests locally to validate behavior in your environment (note: SIGALRM-based regex timeouts are Unix-specific).
4. Confirm cleanup.sh points only at a workspace scratch path you control, and adjust CLEAN_ROOT/ignore rules if needed.
If you cannot review the omitted files, treat the skill as 'suspicious' until a full code review is completed.
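The guidance notes that the bundled regex timeouts are SIGALRM-based and therefore Unix-specific. A minimal sketch of that mechanism, with hypothetical names (the skill's actual implementation may differ):

```python
import re
import signal


class RegexTimeout(Exception):
    """Raised when a regex search exceeds its time budget."""


def search_with_timeout(pattern, text, seconds=2):
    """Run re.search under a SIGALRM deadline (Unix only).

    Illustrative sketch: SIGALRM interrupts a catastrophic-backtracking
    search, which is why this approach does not work on Windows.
    """
    def _handler(signum, frame):
        raise RegexTimeout(f"regex exceeded {seconds}s budget")

    old_handler = signal.signal(signal.SIGALRM, _handler)
    signal.alarm(seconds)
    try:
        return re.search(pattern, text)
    finally:
        signal.alarm(0)  # cancel any pending alarm
        signal.signal(signal.SIGALRM, old_handler)
```

Because SIGALRM can only be delivered to the main thread, a sketch like this also assumes the controller runs its searches there.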
Findings
[instruction_scope_missing_enforcement] expected: The OpenClaw scanner flagged that SKILL.md referenced exec and sessions_spawn but did not show enforcement of safelists. This is a reasonable scanner finding; the repository now includes path validation, input checks, regex timeouts, and redaction. Reviewers should still inspect emission/spawn code (some files were truncated in the provided listing) to confirm enforcement is implemented end-to-end.
[autonomous_invocation_privilege] expected: The scanner noted the skill allows autonomous invocation (disableModelInvocation not set). This is an expected design choice for a batch-oriented RLM controller; it is documented as a trade-off. It is not a disqualifying issue by itself, but operators should consider enabling explicit confirmation in high-security environments.

Review Dimensions

Purpose & Capability
ok — Name/description describe a long-context controller, and the repository actually contains scripts and docs implementing that behavior (context store, peek/search/chunk, planning, spawn manifest, redaction, cleanup). No unexpected environment variables, binaries, or installers are requested. The presence of test files and policy/docs matches the claimed purpose.
Instruction Scope
note — SKILL.md instructs the agent to call only bundled helper scripts and OpenClaw tools (read, write, exec, sessions_spawn). Many scripts were provided, and they contain explicit safeguards: shared path validation (rejects '..' and enforces realpath containment), a regex search timeout to mitigate ReDoS, secret redaction prior to writing subcall prompts, and limits on slices/subcalls. However, a subset of files was omitted from the pasted source (12 files truncated). The docs and an included audit response assert that rlm_emit_toolcalls and related emission code enforce safelists; those enforcement claims are plausible given the shown tests and modules, but full verification requires reviewing the omitted files (notably any file that emits tool names or invokes exec).
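The shared path validation described above (rejecting '..' components and enforcing realpath containment) can be sketched as follows; the function name and error messages are illustrative, not the skill's actual API:

```python
import os


def validate_path(candidate, root):
    """Resolve candidate under root, rejecting traversal and escapes.

    Hypothetical sketch of the described checks: an explicit '..' filter
    plus realpath containment, which also defeats symlink escapes.
    """
    if ".." in candidate.split(os.sep):
        raise ValueError("parent-directory traversal rejected")
    real_root = os.path.realpath(root)
    real_path = os.path.realpath(os.path.join(real_root, candidate))
    # Containment: the fully resolved path must remain inside root.
    if os.path.commonpath([real_root, real_path]) != real_root:
        raise ValueError(f"path escapes workspace root: {candidate}")
    return real_path
```

Checking the resolved realpath (rather than the raw string) is what makes the containment robust against symlinks pointing outside the workspace.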
Install Mechanism
ok — No install spec (instruction-only skill), and all helper scripts are bundled. This is the lowest-risk install model for skills because no external downloads or extract operations occur at install time.
Credentials
ok — The skill declares no required environment variables, no primary credential, and no required config paths. The redaction logic explicitly targets common secret patterns (PEM blocks, bearer/basic tokens, AWS keys, passwords, long hex strings). Asking for no secrets is proportional to the stated functionality.
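The secret categories listed above could be expressed as a pattern table along these lines; the regexes below are illustrative approximations, not the skill's actual patterns:

```python
import re

# Assumed patterns mirroring the categories named in the review:
# PEM blocks, bearer/basic tokens, AWS keys, passwords, long hex strings.
SECRET_PATTERNS = [
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----[\s\S]*?"
               r"-----END [A-Z ]*PRIVATE KEY-----"),
    re.compile(r"(?i)\b(?:bearer|basic)\s+[A-Za-z0-9+/=_\-.]{16,}"),
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),   # AWS access key IDs
    re.compile(r"(?i)password\s*[:=]\s*\S+"),
    re.compile(r"\b[0-9a-f]{40,}\b"),      # long hex strings
]


def redact(text, marker="[REDACTED]"):
    """Replace any secret-pattern match before text leaves the workspace."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub(marker, text)
    return text
```

Running redaction before subcall prompts are written, as the report describes, means a leaked secret never reaches a sub-agent's context.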
Persistence & Privilege
note — The skill does not set always: true and does not request persistent system privileges. It does allow autonomous model invocation by default (disableModelInvocation not set), which is a documented trade-off: useful for large batch runs, but it increases the range of autonomous operations. Hard limits (max recursion depth 1, max subcalls/slices/batches) and platform constraints (sub-agents cannot spawn sub-agents) reduce the blast radius. Operators with stricter threat models are advised to set disableModelInvocation: true.
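The hard limits described above (max recursion depth 1 plus subcall caps) amount to a small budget guard. A sketch of the idea; the constant names and cap value are assumptions for illustration, not the skill's actual configuration:

```python
MAX_RECURSION_DEPTH = 1   # sub-agents may not spawn sub-agents
MAX_SUBCALLS = 32         # assumed cap value for illustration


class BudgetExceeded(Exception):
    """Raised when a spawn would exceed a hard limit."""


class SpawnBudget:
    """Track depth and subcall count so a batch cannot fan out unbounded."""

    def __init__(self, depth=0):
        self.depth = depth
        self.subcalls = 0

    def charge_subcall(self):
        # A caller already at max depth may not spawn further sub-agents.
        if self.depth >= MAX_RECURSION_DEPTH:
            raise BudgetExceeded("sub-agents may not spawn sub-agents")
        if self.subcalls >= MAX_SUBCALLS:
            raise BudgetExceeded("subcall cap reached")
        self.subcalls += 1
```

Enforcing the caps at the single point where spawns are charged is what keeps the blast radius bounded regardless of how the controller plans its batches.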