Skill v0.2.1
ClawScan security
Skill Distiller · ClawHub's context-aware review of the artifact, metadata, and declared behavior.
Scanner verdict
Benign · Apr 15, 2026, 9:20 AM
- Verdict: benign
- Confidence: high
- Model: gpt-5-mini
- Summary: The skill's files and runtime instructions match its stated purpose (compressing skills) and request no unrelated credentials or risky installs. However, it persists local calibration data, and its runtime docs reference LLM provider environment variables that are not declared in the registry metadata; review those behaviors before installing.
- Guidance: This skill appears to do what it says: compress skill documents. Before installing, consider these points:
  - The skill will read skill markdown files you point it at (expected), and it documents writing calibration data to .learnings/skill-distiller/calibration.jsonl in the host environment. If you prefer no on-disk traces, plan to monitor or clean that path.
  - The docs show it will prefer a local ollama model or fall back to GEMINI_API_KEY / OPENAI_API_KEY if set. Those provider env vars are not listed in the registry metadata; if you have cloud keys configured, the skill's runtime documentation indicates it may use them for LLM calls. This is normal for LLM-based tools, but verify you are comfortable with the agent using your configured provider.
  - There is no installer that fetches external code, and no unrelated credentials or network endpoints are embedded in the files, so risk from supply-chain downloads is low.
  - disable-model-invocation is true (the skill is not allowed to autonomously invoke the model), which reduces autonomous-behavior risk. The included test script references ollama usage but is only for manual testing.
  - If you want extra caution: review or sandbox the first run, inspect .learnings after usage, and ensure any provider keys you use are scoped appropriately.
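For the "inspect .learnings after usage" step above, a minimal audit helper might look like the following sketch. The path comes from the skill's docs, but the skill's actual JSONL schema is not published here, so the helper is schema-agnostic and simply reports how many entries were persisted and which fields they carry:

```python
import json
from pathlib import Path

# Path documented by the skill for its persisted calibration data.
CALIBRATION_PATH = Path(".learnings/skill-distiller/calibration.jsonl")

def summarize_calibration(path: Path = CALIBRATION_PATH) -> dict:
    """Report how many JSONL entries the skill wrote and which keys they use."""
    if not path.exists():
        return {"entries": 0, "keys": set()}
    entries = [
        json.loads(line)
        for line in path.read_text().splitlines()
        if line.strip()
    ]
    keys = set().union(*(e.keys() for e in entries)) if entries else set()
    return {"entries": len(entries), "keys": keys}
```

Running this after a session (and deleting the file if you want no on-disk traces) is a lightweight way to verify exactly what was persisted.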
Review Dimensions
- Purpose & Capability
- ok: Name and description (skill compression/distillation) align with the provided SKILL.md, reference docs, and test fixtures. The required capabilities (parsing markdown, scoring sections, producing compressed output) are consistent with what is present; no unrelated cloud or system access is requested in the metadata or files.
- Instruction Scope
- note: SKILL.md instructs reading/parsing skill markdown files provided by the user and producing compressed output, which is expected. It also documents provider detection (ollama, GEMINI_API_KEY, OPENAI_API_KEY) and describes writing calibration data to .learnings/skill-distiller/calibration.jsonl. These actions are within the skill's purpose, but they do include persistent local writes and runtime checks for local tooling (ollama).
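The provider-detection order documented by the skill (a local ollama install first, then cloud keys) can be sketched as follows. The function name, the `have_ollama` override, and the return labels are illustrative assumptions for review purposes, not the skill's actual code:

```python
import os
import shutil

def detect_provider(env=None, have_ollama=None):
    """Pick an LLM provider per the documented order: ollama, then cloud keys."""
    env = os.environ if env is None else env
    if have_ollama is None:
        # Probe PATH for a local ollama binary (hypothetical check).
        have_ollama = shutil.which("ollama") is not None
    if have_ollama:
        return "ollama"
    if env.get("GEMINI_API_KEY"):
        return "gemini"
    if env.get("OPENAI_API_KEY"):
        return "openai"
    return "none"
```

The point for reviewers: even though the registry metadata declares no env vars, logic of this shape will silently pick up any GEMINI_API_KEY or OPENAI_API_KEY already present in the agent's environment.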
- Install Mechanism
- ok: This is an instruction-only skill (no install spec). There are no downloads, package installs, or extract operations. The only executable artifact is an included test_integration.sh for manual testing; it does not run automatically.
- Credentials
- note: Registry metadata declares no required env vars, but the SKILL.md and README describe probing for ollama and optionally using GEMINI_API_KEY or OPENAI_API_KEY for cloud inference. That is reasonable for an LLM-driven tool, but because the skill does not declare those env vars in its registry metadata, users should be aware the runtime will check for, and may use, any provider credentials their agent has configured.
- Persistence & Privilege
- note: The skill documents saving calibration entries to .learnings/skill-distiller/calibration.jsonl (append, with rotation). This is reasonable for calibration but is a persistent write to the host filesystem. always:false and disable-model-invocation:true reduce autonomous risk; the skill does not request system-wide config changes or access to other skills' credentials.
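The "append, with rotation" behavior described above can be pictured as a capped JSONL log. The cap value and helper below are assumptions for illustration, not the skill's own implementation:

```python
import json
from pathlib import Path

MAX_ENTRIES = 500  # hypothetical rotation cap; the skill's actual limit is not stated

def append_calibration(path: Path, entry: dict, max_entries: int = MAX_ENTRIES) -> None:
    """Append one JSONL entry, keeping only the newest max_entries lines."""
    path.parent.mkdir(parents=True, exist_ok=True)
    lines = path.read_text().splitlines() if path.exists() else []
    lines.append(json.dumps(entry))
    # Rotation: drop the oldest lines once the cap is exceeded.
    path.write_text("\n".join(lines[-max_entries:]) + "\n")
```

A scheme like this bounds on-disk growth, which is why the report treats the persistence as low-risk housekeeping rather than unbounded data collection.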
