Skill v1.0.0

ClawScan security

Expanso secrets-scan · ClawHub's context-aware review of the artifact, metadata, and declared behavior.

Scanner verdict

Suspicious · Feb 11, 2026, 9:43 AM
Verdict
suspicious
Confidence
medium
Model
gpt-5-mini
Summary
The skill generally matches a secrets-scanning purpose, but there are inconsistencies and privacy risks (unconditional sending of scanned text to a remote LLM and a mismatch about whether an OpenAI API key is required) that you should understand before installing or running it on sensitive data.
Guidance
This skill is a legitimate-looking secrets scanner, but by design it will send the text you give it to OpenAI's API when an OPENAI_API_KEY is supplied. That means any secrets in the input may leave your machine and may appear in the tool's output (the pipeline asks for "full_match" values). Before using it:

1) Decide whether you are comfortable sending repository contents or other sensitive data to an external LLM.
2) If you need local-only scanning, verify or implement the regex/local backend and remove or disable the openai_chat_completion processor.
3) Confirm the OPENAI_API_KEY behavior: if you don't set a key, test how the pipeline behaves (it may error).
4) If you must use the LLM, consider changing the output schema to redact full secrets (return only partial redactions) and avoid logging or storing full matches.
5) If deploying MCP mode, restrict access to the /scan endpoint (authentication, network controls) to prevent remote abuse.

If you want, I can produce a minimal, local-only variant of the pipeline that never sends data to remote services and returns only redacted matches.
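As a sketch of the local-only direction the guidance describes, the following Python fragment scans text with a few illustrative regex rules and emits only partially redacted matches. The rule names and patterns here are assumptions for illustration, not the skill's actual rule set, and a real scanner would ship far more rules:

```python
import re

# Hypothetical local-only rules; a production scanner would carry many more.
PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
}

def redact(value: str) -> str:
    """Keep only the first and last 2 chars so a match is identifiable but unusable."""
    if len(value) <= 8:
        return "*" * len(value)
    return value[:2] + "*" * (len(value) - 4) + value[-2:]

def scan(text: str) -> list:
    """Scan text locally; nothing ever leaves the machine."""
    findings = []
    for rule, pattern in PATTERNS.items():
        for m in pattern.finditer(text):
            findings.append({
                "rule": rule,
                "line": text.count("\n", 0, m.start()) + 1,
                # Unlike the pipeline's "full_match" field, only a redacted
                # form of the secret is ever emitted.
                "redacted_match": redact(m.group(0)),
            })
    return findings
```

This inverts the pipeline's high-risk choice: instead of asking a remote model to return full secret strings, the scanner never materializes an unredacted match in its output.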

Review Dimensions

Purpose & Capability
note · The skill name and files describe a secrets scanner, and the pipeline processors perform exactly that (pattern/LLM-based scanning). Requiring or using an OpenAI API key for LLM-enhanced detection is reasonable for this purpose. However, documentation and metadata are inconsistent: the README claims OPENAI_API_KEY is required, while skill.yaml marks it optional even though both the CLI and MCP pipelines unconditionally call an openai_chat_completion processor. This mismatch is noteworthy because it affects whether the pipeline will actually send your input off-machine.
Instruction Scope
concern · Pipelines send the entire provided text/code to the remote OpenAI model (openai_chat_completion) as the content to scan. The requested output schema explicitly asks for a "full_match" field (the full matched string), which means the pipeline expects the LLM to return full secret strings. That behavior would cause any found secrets to be included in the pipeline output and, since the LLM receives the full input, to be transmitted to OpenAI. The MCP pipeline exposes an HTTP /scan endpoint that can accept arbitrary text and forward it to the LLM. These instructions stay within the stated purpose (scanning) but entail sending sensitive data off-host and returning full secret values, which is a high-privacy-risk design choice.
Install Mechanism
ok · This is an instruction-only skill with no install spec or code to download. It requires Expanso Edge (local binary) to run pipelines; no third-party downloads or install scripts are included, so installation risk is low.
Credentials
concern · The only credential referenced is OPENAI_API_KEY (skill.yaml marks it optional), yet both pipeline files use openai_chat_completion and reference ${OPENAI_API_KEY} directly, and the README states OPENAI_API_KEY is required. That inconsistency could lead to runtime failures or accidental unprotected behavior. Requiring an OpenAI key is proportionate to LLM-based scanning, but you must be explicit that supplying it causes your scanned data to be sent to OpenAI. No unrelated credentials are requested.
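Before running the pipeline on sensitive input, a small guard like the following (a hypothetical wrapper, not something the skill ships) makes both the key's presence and its privacy consequence explicit. The actual pipeline may simply fail on an empty ${OPENAI_API_KEY} interpolation:

```python
import os
import sys
from typing import Optional

def require_openai_key() -> Optional[str]:
    """Return the API key if set; otherwise report why LLM scanning cannot run.

    Hypothetical pre-flight check: it also surfaces the privacy consequence
    of supplying a key, since the pipeline then sends scanned text off-host.
    """
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        print("OPENAI_API_KEY is not set: LLM-based scanning is unavailable.",
              file=sys.stderr)
        return None
    print("Warning: with a key set, scanned text will be sent to OpenAI's API.",
          file=sys.stderr)
    return key
```

Running this once before invoking the pipeline resolves the README/skill.yaml ambiguity empirically: either the key is present (and data will leave the machine) or it is absent (and you can observe how the pipeline fails).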
Persistence & Privilege
ok · The skill does not request always:true or any persistent/privileged presence. It does not modify other skills or system-level configuration. MCP mode runs an HTTP server only when you start it, which is expected for a service exposing a scan endpoint.
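If you do start MCP mode, the /scan endpoint should not accept unauthenticated input. A constant-time bearer-token check along these lines is one way to gate it; SCAN_TOKEN is a hypothetical environment variable, and in practice Expanso's server would need its own auth hook or a reverse proxy in front of it:

```python
import hmac
import os

def authorize_scan_request(headers: dict) -> bool:
    """Gate a hypothetical /scan handler behind a shared bearer token.

    Uses hmac.compare_digest for a constant-time comparison, and refuses
    all requests when no token is configured (fail closed).
    """
    expected = os.environ.get("SCAN_TOKEN", "")
    supplied = headers.get("Authorization", "").removeprefix("Bearer ").strip()
    return bool(expected) and hmac.compare_digest(supplied, expected)
```

Network-level controls (binding the server to localhost, firewall rules) remain the simpler option when the scanner only needs to be reachable from the same machine.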