Skill · v0.4.3
ClawScan security
Coverify · ClawHub's context-aware review of the artifact, metadata, and declared behavior.
Scanner verdict
Benign · Mar 14, 2026, 4:29 PM
- Verdict
- Benign
- Confidence
- high
- Model
- gpt-5-mini
- Summary
- The skill's code and runtime instructions are coherent with its stated purpose (commitment-token extraction, Jaccard comparison, ghost-token reporting, and a model-swap harness). It requests no credentials or external installs and does not contact remote endpoints, though there are implementation rough edges you should review before trusting results.
- Guidance
- This skill appears internally consistent and self-contained: it performs local text analysis, requires no credentials, and writes outputs under ~/.openclaw. Before installing or running it:
  1. Review and fix the truncated code in scripts/commitment_verify.py; the file ends mid-assignment and will likely crash the ghost command.
  2. Be aware the tool prints and saves extracted kernel sentences and reports (these can contain sensitive text) to ~/.openclaw; do not run it on sensitive secrets unless you accept local persistence.
  3. model_swap_test.py expects either a provided JSON extract or a file, and suggests recording patterns in a pattern_registry that is not included; evaluate whether you need that registry or should add it.
  4. Test the scripts in a sandboxed environment to confirm behavior and output locations before integrating them into any governance automation.

  There are no remote endpoints or secret requests in the package.
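The Jaccard comparison and ghost-token reporting named in the summary are standard set operations. The sketch below shows one plausible reading of both; the tokenizer (whitespace split) and the interpretation of "ghost tokens" as candidate-only tokens are assumptions, not the skill's actual implementation:

```python
def jaccard(a, b):
    """Jaccard similarity of two token sequences: |A ∩ B| / |A ∪ B|."""
    sa, sb = set(a), set(b)
    if not sa and not sb:
        return 1.0  # two empty texts are treated as identical
    return len(sa & sb) / len(sa | sb)

def ghost_tokens(baseline, candidate):
    """Tokens appearing in the candidate but never in the baseline."""
    return sorted(set(candidate) - set(baseline))

baseline = "the agent must never write outside the sandbox".split()
candidate = "the agent must never write files outside its sandbox".split()
score = jaccard(baseline, candidate)        # → 7/9 ≈ 0.78
ghosts = ghost_tokens(baseline, candidate)  # → ['files', 'its']
```

A sandboxed run comparing a known baseline against the skill's own output is a quick way to confirm whether its reported scores match this textbook definition.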
Review Dimensions
- Purpose & Capability
- ok · Name/description match the included scripts and SKILL.md: the Python scripts implement extraction, Jaccard comparison, ghost-token reporting, and a model-swap test harness. There are no unrelated environment variables, binaries, or external credentials requested.
- Instruction Scope
- note · SKILL.md instructs the agent to run the included Python scripts and to use a local audit ledger at ~/.openclaw/audits/moses/audit_ledger.jsonl, which is consistent with a verification tool. However, the included commitment_verify.py source appears truncated near the end (an unfinished assignment to hashlib.sha2), which will likely produce a runtime error for the ghost command as shipped. model_swap_test.py references pattern_registry.py (for recording ghost patterns), which is not included. Also note the tool prints and stores extracted kernels and may write results and saved reports under ~/.openclaw, so inputs (or kernel excerpts) may be persisted in plaintext.
- Install Mechanism
- ok · There is no install spec (instruction-only with bundled scripts); nothing is downloaded from the network or installed automatically. This is lower risk. The SKILL.md suggests 'clawhub install coverify', but the package contains only local scripts (the platform's install step is not present here).
- Credentials
- ok · The skill declares no required environment variables, no credentials, and no config paths beyond per-user directories under the home (~/.openclaw/*). Those file accesses are proportional to an audit/ledger-style verification tool. No network endpoints, tokens, or secrets are requested.
- Persistence & Privilege
- ok · The skill does create and read files under the user's home directory (audit ledger, model-swap results), which is expected for an audit tool. It does not request elevated privileges, does not set always:true, and does not modify other skills' configs. Autonomous invocation is allowed by default (platform behavior) but is not combined with any broad credential access here.
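The audit ledger noted under Instruction Scope is a JSONL file (one JSON object per line) under ~/.openclaw. A minimal sketch of appending a record, useful when checking in a sandbox what the skill actually persists; the field names here are illustrative assumptions, not the skill's real schema, and only the ledger path comes from SKILL.md:

```python
import json
import time
from pathlib import Path

# Path taken from SKILL.md; everything written here lands in plaintext
# under the user's home directory.
LEDGER = Path.home() / ".openclaw" / "audits" / "moses" / "audit_ledger.jsonl"

def append_entry(verdict: str, score: float, ledger: Path = LEDGER) -> None:
    """Append one audit record as a single JSON line (hypothetical fields)."""
    ledger.parent.mkdir(parents=True, exist_ok=True)
    entry = {"ts": time.time(), "verdict": verdict, "jaccard": score}
    with ledger.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(entry) + "\n")  # JSONL: one object per line
```

Because entries are appended in plaintext, anything the skill extracts into a record persists until manually removed, which is the persistence caveat raised in the Guidance.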
