Skill v1.0.0
ClawScan security
Huizai Context Optimizer · ClawHub's context-aware review of the artifact, metadata, and declared behavior.
Scanner verdict
Benign · Feb 22, 2026, 10:11 AM
- Verdict: benign
- Confidence: high
- Model: gpt-5-mini
- Summary: The skill's files, instructions, and requirements are consistent with a context-compression/evaluation tool and do not request disproportionate privileges or unexplained resources.
- Guidance: This package appears to be a documentation and evaluation helper for context compression and is internally consistent. Things to consider before installing or running it:
  1. The included Python evaluator has stubs for LLM judge calls and defaults to a model name (gpt-5.2). If you wire it to a real LLM you will need to supply API credentials at runtime; that is expected, but keep credentials scoped and minimized.
  2. The probe extraction uses regex-based heuristics (simple patterns for errors, file names, decisions) that can misclassify or miss things; treat results as heuristic, not authoritative.
  3. The skill itself contains no hidden network endpoints or credential requests, but if you adapt the evaluator to call external APIs, review where those network calls go and which credentials they use.

  Overall, the skill is coherent with its stated purpose.
Review Dimensions
- Purpose & Capability
- ok: Name, description, SKILL.md, and included files all focus on context compression and probe-based evaluation. The included Python evaluator and evaluation-framework docs are consistent with the stated goal; no unrelated binaries, credentials, or config paths are requested.
- Instruction Scope
- ok: SKILL.md instructs when and how to compress conversation context and outlines probe/evaluation approaches. It does not instruct the agent to read arbitrary system files, harvest environment variables, or send data to unknown endpoints. The runtime instructions and examples stay within the stated purpose.
- Install Mechanism
- ok: There is no install spec (the skill is instruction-only with a helper script). Nothing is downloaded or written by an installer; the single Python file is included in the package. This is proportionate for a documentation and evaluation helper skill.
- Credentials
- ok: The skill declares no required environment variables, credentials, or config paths. The included code references using an LLM judge (default model 'gpt-5.2') but does not embed API keys or external endpoints, which is reasonable for an evaluation tool. If you run the evaluator in production, you will need to provide appropriate model/API credentials at runtime, which is expected.
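One way to keep that runtime credential scoped, as a minimal sketch: read it from a single environment variable and fail loudly if it is absent, rather than embedding it in code or config. The variable name, function, and model constant here are assumptions for illustration, not part of the skill:

```python
import os

# The review notes 'gpt-5.2' as the evaluator's default judge model.
MODEL_NAME = "gpt-5.2"

def judge_api_key() -> str:
    """Fetch the judge credential from the environment (assumed variable name)."""
    key = os.environ.get("EVAL_JUDGE_API_KEY")
    if not key:
        raise RuntimeError("Set EVAL_JUDGE_API_KEY before running the evaluator")
    return key
```

Keeping the key in a process-scoped environment variable, supplied only when the evaluator actually runs, matches the "scoped/minimized" advice in the guidance above.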
- Persistence & Privilege
- ok: The skill does not request `always: true` and is user-invocable only. It does not attempt to modify other skills or system-wide settings. Autonomous model invocation remains the platform default and is not abused here.
