Expanso secrets-scan
v1.0.0
Detect hardcoded secrets like API keys, tokens, and passwords in text or code using Expanso Edge pipelines.
by Expanso (@aronchick)
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan (OpenClaw)
Verdict: Suspicious (medium confidence)

Purpose & Capability
The skill name and files describe a secrets scanner and the pipeline processors perform exactly that (pattern/LLM-based scanning). Requiring or using an OpenAI API key for LLM-enhanced detection is reasonable for this purpose. However, documentation and metadata are inconsistent: README claims OPENAI_API_KEY is required, while skill.yaml marks it optional even though both CLI and MCP pipelines unconditionally call an openai_chat_completion processor. This mismatch is noteworthy because it affects whether the pipeline will actually send your input off-machine.
Instruction Scope
Pipelines send the entire provided text/code to the remote OpenAI model (openai_chat_completion) as the content to scan. The requested output schema explicitly asks for a "full_match" field (the full matched string) which means the pipeline expects the LLM to return full secret strings. That behavior would cause any found secrets to be included in the pipeline output and — since the LLM receives the full input — to be transmitted to OpenAI. The MCP pipeline exposes an HTTP /scan endpoint that can accept arbitrary text and forward it to the LLM. These instructions stay within the stated purpose (scanning) but entail sending sensitive data off-host and returning full secret values, which is a high-privacy-risk design choice.
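To make the data flow concrete, here is a minimal, hypothetical sketch (not the skill's actual code, and not the openai_chat_completion processor's real interface) of what the pipeline effectively does: the full input text goes to the remote model, and the parsed response can carry complete secret values straight through to the output.

```python
import json

def scan_with_llm(text, chat_completion):
    """Hypothetical sketch: forward the ENTIRE input to a remote
    chat-completion backend and parse findings. Any secret in `text`
    leaves the machine, and any `full_match` the model returns comes
    back verbatim in the pipeline output."""
    prompt = (
        "Find hardcoded secrets in the following text. "
        'Reply as JSON: {"findings": [{"type": ..., "full_match": ...}]}\n\n'
        + text  # the whole input, secrets included, is sent off-host
    )
    raw = chat_completion(prompt)        # remote call in the real pipeline
    return json.loads(raw)["findings"]   # full_match values pass through untouched

# Illustration with a stubbed backend (no network involved):
fake_reply = '{"findings": [{"type": "api_key", "full_match": "sk-test-1234"}]}'
findings = scan_with_llm("api_key = 'sk-test-1234'", lambda _prompt: fake_reply)
```

The stub stands in for the remote model; with a real backend, both the prompt (your data) and the response (full secret strings) would transit OpenAI's servers.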
Install Mechanism
This is an instruction-only skill with no install spec or code to download. It requires Expanso Edge (local binary) to run pipelines; no third-party downloads or install scripts are included, so installation risk is low.
Credentials
The only credential referenced is OPENAI_API_KEY (skill.yaml marks it optional). Yet both pipeline files use openai_chat_completion and reference ${OPENAI_API_KEY} directly, and the README states that OPENAI_API_KEY is required. That inconsistency could lead to runtime failures or behavior the user did not anticipate. Requiring an OpenAI key is proportionate to LLM-based scanning, but supplying the key means your scanned data is sent to OpenAI, and the skill should say so explicitly. No unrelated credentials are requested.
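One way to resolve the required-vs-optional ambiguity on the caller's side is to fail closed: refuse to run the LLM step unless the key is explicitly present. A minimal sketch, assuming nothing about Expanso's own validation:

```python
import os

def require_openai_key():
    """Fail closed: raise before any text can be forwarded off-host
    if OPENAI_API_KEY is missing or empty."""
    key = os.environ.get("OPENAI_API_KEY", "").strip()
    if not key:
        raise RuntimeError(
            "OPENAI_API_KEY is not set; refusing to run the LLM scan step. "
            "Use a local-only backend or export the key explicitly."
        )
    return key
```

Calling this before the pipeline runs turns the ambiguous "optional" key into an explicit, auditable decision.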
Persistence & Privilege
The skill does not request always:true or any persistent/privileged presence. It does not modify other skills or system-level configuration. MCP mode runs an HTTP server only when you start it, which is expected for a service exposing a scan endpoint.
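If you do run the MCP HTTP server, a simple mitigation is to require a shared bearer token on every request before the body is forwarded anywhere. A hypothetical check (the real server's auth hooks, if any, are not documented here):

```python
import hmac

def scan_request_authorized(headers, expected_token):
    """Constant-time check of an `Authorization: Bearer <token>` header
    before a /scan request body is forwarded to the pipeline."""
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return False
    supplied = auth[len("Bearer "):]
    # hmac.compare_digest avoids leaking token length/prefix via timing
    return hmac.compare_digest(supplied, expected_token)
```

Network-level controls (binding to localhost, firewall rules) are a complementary layer; the token check guards against anything that can reach the port.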
What to consider before installing
This skill is a legitimate-looking secrets scanner, but by design it will send the text you give it to OpenAI's API when an OPENAI_API_KEY is supplied. That means any secrets in the input may leave your machine and may appear in the tool's output (the pipeline asks for "full_match" values). Before using it:

1. Decide whether you are comfortable sending repository contents or other sensitive data to an external LLM.
2. If you need local-only scanning, verify or implement the regex/local backend and remove or disable the openai_chat_completion processor.
3. Confirm the OPENAI_API_KEY behavior: if you don't set a key, test how the pipeline behaves (it may error).
4. If you must use the LLM, consider changing the output schema to redact full secrets (return only partial redactions) and avoid logging or storing full matches.
5. If deploying MCP mode, restrict access to the /scan endpoint (authentication, network controls) to prevent remote abuse.
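The redaction suggested above can be as simple as masking each match before it is returned or logged, keeping only a short identifying prefix. A sketch:

```python
def redact(secret, keep=4):
    """Return a partial redaction (prefix + mask) so findings never
    expose the full secret value in output or logs."""
    if len(secret) <= keep:
        return "*" * len(secret)
    return secret[:keep] + "*" * (len(secret) - keep)
```

Applying this to every "full_match" value in the pipeline output leaves enough context to locate the secret without reproducing it.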
latest: vk97d99fcsbamqfsctc85haf63x80wtap
