Fang: protect your env variables from being stolen.

v1.0.0

Protect environment variables from being stolen by malicious skill scripts. Runs a two-phase security audit: (1) a static pattern scan via scan_env.py to detect suspicious code, and (2) an optional LLM deep analysis via fang_audit.py.

by Jay@goog
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Benign
OpenClaw: Benign (medium confidence)
Purpose & Capability
The name/description (detect env-var theft) aligns with the included scripts: scan_env.py performs a regex-based static scan and fang_audit.py orchestrates static + optional LLM analysis. There is a small documentation/code mismatch: SKILL.md says Phase 2 runs automatically if an LLM key is available in the environment, but the provided fang_audit.py only uses an explicit --llm-key CLI argument (it does not auto-read a named env var). Otherwise the capabilities requested are proportional to the stated purpose.
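The Phase 1 mechanics described above can be sketched as a minimal regex scanner. This is an illustrative reconstruction based on the review's description, not the actual contents of scan_env.py; the pattern list and function names are assumptions.

```python
import re
from pathlib import Path

# Illustrative categories only -- the real scan_env.py ships its own pattern list.
SUSPICIOUS_PATTERNS = {
    "network": re.compile(r"\b(urllib|socket|requests)\b"),
    "exec": re.compile(r"\b(os\.system|subprocess|eval|exec)\b"),
    "env_access": re.compile(r"\bos\.environ\b"),
}

def scan_file(path: Path) -> list[tuple[str, int]]:
    """Return (category, line_number) hits for one script."""
    hits = []
    for lineno, line in enumerate(path.read_text(errors="replace").splitlines(), 1):
        for category, pattern in SUSPICIOUS_PATTERNS.items():
            if pattern.search(line):
                hits.append((category, lineno))
    return hits

def scan_dir(target: Path) -> dict[str, list[tuple[str, int]]]:
    """Phase 1 only looks at .py and .sh files, per the review above."""
    findings = {}
    for ext in ("*.py", "*.sh"):
        for f in target.rglob(ext):
            hits = scan_file(f)
            if hits:
                findings[str(f)] = hits
    return findings
```

Note how pattern strings like 'os.system' appear inside the scanner itself, which is why the scanner can flag its own files (see Scan Findings in Context below).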
Instruction Scope
Phase 1 is local static scanning of .py/.sh files (coherent with the stated purpose). Phase 2 collects script contents (up to 3000 characters per file) and sends them to an OpenAI-compatible API using the provided key and base_url. Sending full or truncated source code to a remote LLM can leak secrets, credentials, or sensitive code; this is intentional for deep analysis, but it is a clear privacy/exfiltration risk the user must accept explicitly. Note also that Phase 2 covers additional extensions (.js/.ts/.ps1) beyond Phase 1's .py/.sh.
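The Phase 2 data flow can be sketched as follows: collect per-file snippets, truncate each to 3000 characters, and assemble them into an OpenAI-style chat request. This is a hypothetical reconstruction from the description above; the real fang_audit.py request format, model name, and prompt may differ. The sketch only builds the payload and does not send it.

```python
from pathlib import Path

MAX_CHARS = 3000  # per-file truncation limit described in the review
PHASE2_EXTS = {".py", ".sh", ".js", ".ts", ".ps1"}

def build_llm_payload(target: Path, model: str = "gpt-4o-mini") -> dict:
    """Collect truncated script contents into one OpenAI-style chat request.

    Everything in this payload leaves the machine when sent -- including any
    secrets or credentials embedded in the scanned files.
    """
    snippets = []
    for f in sorted(target.rglob("*")):
        if f.is_file() and f.suffix in PHASE2_EXTS:
            text = f.read_text(errors="replace")[:MAX_CHARS]
            snippets.append(f"--- {f.name} ---\n{text}")
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "Audit these scripts for env-var theft."},
            {"role": "user", "content": "\n\n".join(snippets)},
        ],
    }
```

Because the snippets are raw file contents, any hard-coded token or key in a scanned script is transmitted verbatim to whatever endpoint base_url points at.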
Install Mechanism
No install spec (instruction-only plus included scripts). Nothing is downloaded or executed during install. This minimal footprint is appropriate for a local audit tool.
Credentials
The skill declares no required environment variables or credentials (correct). The only sensitive input the tool accepts is an optional LLM API key via the --llm-key CLI argument, which is needed only for the LLM deep analysis. That key is used to call the provided base_url (default api.openai.com). Accepting the key is proportionate to the LLM feature; it is not required for the static scan.
Persistence & Privilege
The skill's always flag is false, and it does not modify system-wide settings or other skills. It requests no persistent privileges and has no self-enabling behavior.
Scan Findings in Context
[PATTERN_STRINGS_IN_SCANNER] expected: The scanner includes regex strings for network/encoding/exec (e.g., 'urllib', 'socket', 'os.system', 'subprocess') which, if the scanner were run against its own files, could appear as findings. These pattern strings are intentional for detection logic and are expected.
Assessment
This tool does what it says: the static scan runs locally without any external network calls. If you enable the LLM deep analysis by supplying an API key (and optionally a base URL), the tool sends snippets of the scanned files to that external LLM, and those snippets can contain secrets or sensitive code. Before using LLM mode: (1) only provide a key for an endpoint you trust (prefer a local or private LLM endpoint); (2) run the static-only scan first and review flagged files locally; (3) avoid scanning directories that contain unrelated sensitive files (run it per-skill or on a copy); (4) remember that the scanner's heuristics can produce false positives, including flagging the scanner itself. If you need a static-only audit that sends no data externally, run python scripts/fang_audit.py <target_dir> without --llm-key.

Like a lobster shell, security has layers — review code before you run it.


