Fang: protect your env variables from being stolen.

v1.0.0

Protect environment variables from being stolen by malicious skill scripts. Runs a two-phase security audit: (1) static pattern scan via scan_env.py to detec...

by Jay@goog

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for goog/fang.

Prompt Preview: Install & Setup
Install the skill "Fang: protect your env variables from being stolen." (goog/fang) from ClawHub.
Skill page: https://clawhub.ai/goog/fang
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install fang

ClawHub CLI


npx clawhub@latest install fang
Security Scan
VirusTotal: Benign
OpenClaw: Benign (medium confidence)
Purpose & Capability
The name/description (detect env-var theft) aligns with the included scripts: scan_env.py performs a regex-based static scan and fang_audit.py orchestrates static + optional LLM analysis. There is a small documentation/code mismatch: SKILL.md says Phase 2 runs automatically if an LLM key is available in the environment, but the provided fang_audit.py only uses an explicit --llm-key CLI argument (it does not auto-read a named env var). Otherwise the capabilities requested are proportional to the stated purpose.
Instruction Scope
Phase 1 is local static scanning of .py/.sh files (coherent). Phase 2 collects script contents (up to 3000 chars per file) and sends them to an OpenAI-compatible API using the provided key/base_url. Sending full or truncated source code to a remote LLM can leak secrets, credentials, or sensitive code; this is intentional for deep analysis, but it is a clear privacy/exfiltration risk the user must accept explicitly. Note also that Phase 1 statically scans only .py/.sh files, while Phase 2 covers additional extensions (.js/.ts/.ps1), which is consistent with the documentation but worth noting.
Install Mechanism
No install spec (instruction-only plus included scripts). Nothing is downloaded or executed during install. This minimal footprint is appropriate for a local audit tool.
Credentials
The skill declares no required environment variables or credentials (correct). The only sensitive input the tool accepts is an optional LLM API key (the --llm-key CLI argument), which is necessary only for the stated LLM deep analysis. That key is used to call the provided base_url (default api.openai.com). Requiring a key is proportionate to the LLM feature, and it is not required for the static scan.
Persistence & Privilege
The always flag is false, and the skill does not modify system-wide settings or other skills. It does not request persistent privileges or self-enabling behavior.
Scan Findings in Context
[PATTERN_STRINGS_IN_SCANNER] expected: The scanner includes regex strings for network/encoding/exec (e.g., 'urllib', 'socket', 'os.system', 'subprocess') which, if the scanner were run against its own files, could appear as findings. These pattern strings are intentional for detection logic and are expected.
Assessment
This tool does what it says: the static scan runs locally without any external network calls. If you enable LLM deep analysis by supplying an API key (and optionally a base URL), the tool sends snippets of the scanned files to that external LLM, and those snippets can contain secrets or sensitive code. Before using the LLM mode: (1) only provide a key for an endpoint you trust (prefer a local/private LLM endpoint); (2) consider running the static-only scan first and reviewing flagged files locally; (3) avoid scanning directories that contain unrelated sensitive files (run it per skill or on a copy); and (4) be aware the scanner's heuristics can produce false positives (including flagging the scanner itself). To audit without sending data externally, run python scripts/fang_audit.py <target_dir> without --llm-key.

Like a lobster shell, security has layers — review code before you run it.

latest: vk971yy5pn0a1z9xgfh3hze2w5h83j4k1
117 downloads
0 stars
1 version
Updated 1 month ago
v1.0.0
MIT-0

FANG — ENV Guard

Two-phase audit tool to detect environment variable theft in skill scripts.

Scripts

Script                   Purpose
scripts/fang_audit.py    Main audit runner — static scan + LLM deep analysis
scripts/scan_env.py      Static pattern scanner (env / network / encode / exec)

Phase 1 — Static Scan

Uses scan_env.py regex rules across .py and .sh files.
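The actual regex rules ship inside scan_env.py and are not reproduced on this page; as a minimal sketch of how such a pattern scan might look (the pattern strings, groupings, and scan_text name below are illustrative assumptions, not scan_env.py's real code):

```python
import re

# Hypothetical pattern groups approximating the four flag categories the
# scanner reports; the skill's actual rule set may differ.
PATTERNS = {
    "env access": re.compile(r"os\.environ|getenv|\$\{?[A-Z_]+\}?"),
    "network call": re.compile(r"urllib|requests|socket|curl|wget"),
    "base64 / encode": re.compile(r"base64|b64encode|\bhex\b"),
    "exec / subprocess": re.compile(r"os\.system|subprocess|\beval\(|\bexec\("),
}

def scan_text(source: str) -> list[str]:
    """Return the flag names whose pattern matches anywhere in the source."""
    return [name for name, rx in PATTERNS.items() if rx.search(source)]
```

Note that, as the review above points out, a scanner built this way will flag its own pattern strings if pointed at itself.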

Risk scoring:

Flag                 Points
env access           +2
network call         +3
base64 / encode      +2
exec / subprocess    +2

Score ≥ 6 → HIGH · ≥ 3 → MEDIUM · > 0 → LOW · 0 → CLEAN

Phase 2 — LLM Deep Analysis (optional)

Reads all .py, .sh, .js, .ts, .ps1, and .bash scripts in the target directory and sends them to an OpenAI-compatible LLM. The LLM checks for:

  • Env reads combined with outbound HTTP/socket/DNS
  • Obfuscation: base64, hex, eval, dynamic imports
  • Hardcoded exfiltration endpoints
  • Suspicious subprocess chains
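A rough sketch of the collection step this phase implies (the collect_sources helper and constants are assumptions; only the extension list and the 3000-char truncation are documented on this page):

```python
from pathlib import Path

# Extensions Phase 2 reads, per the description above.
SCAN_EXTS = {".py", ".sh", ".js", ".ts", ".ps1", ".bash"}
MAX_CHARS = 3000  # per-file truncation documented in the Notes section

def collect_sources(target_dir: str) -> dict[str, str]:
    """Gather (possibly truncated) script contents destined for the LLM prompt."""
    sources = {}
    for path in sorted(Path(target_dir).rglob("*")):
        if path.is_file() and path.suffix in SCAN_EXTS:
            sources[str(path)] = path.read_text(errors="replace")[:MAX_CHARS]
    return sources
```

Everything such a step returns leaves the machine once an LLM key is supplied, which is why the review above recommends running the static-only scan on sensitive directories.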

Usage

Basic static scan only

python scripts/fang_audit.py <target_dir>

With LLM deep analysis

python scripts/fang_audit.py <target_dir> --llm-key sk-... --model gpt-4o-mini

OpenAI-compatible API (e.g. local Ollama / DeepSeek)

python scripts/fang_audit.py <target_dir> \
  --llm-key any \
  --model deepseek-chat \
  --base-url https://api.deepseek.com/v1

Save report to file

python scripts/fang_audit.py <target_dir> --llm-key sk-... --output report.txt

Scan all workspace skills at once

python scripts/fang_audit.py C:/Users/dad/.openclaw/workspace/skills

Agent Workflow

When the user asks to audit skills for env theft:

  1. Ask for the target directory (default: workspace skills/ folder)
  2. Run Phase 1 static scan — report summary immediately
  3. If HIGH or MEDIUM risks found, ask whether to run LLM deep analysis
  4. If --llm-key is available (from env or user), run Phase 2 automatically
  5. Present the final threat report:
    • List each risky file with risk level + reason
    • Highlight any CRITICAL combined patterns (env read + network send)
    • Recommend action: QUARANTINE (HIGH), REVIEW (MEDIUM), MONITOR (LOW)

Risk Response Guide

Risk Level    Recommended Action
🔴 HIGH       Immediately quarantine the skill; do not run it
🟡 MEDIUM     Manual code review before use
🟢 LOW        Monitor; likely benign but worth noting
✅ CLEAN      Safe to use

Notes

  • The LLM analysis truncates each file to 3000 chars to stay within token limits.
  • For very large skill directories, consider scanning one skill at a time.
  • scan_env.py only processes .py and .sh files; fang_audit.py LLM mode also covers .js, .ts, .ps1.
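Following the per-skill advice above, one way to script a skill-by-skill sweep is a small wrapper (the scan_each_skill function, its arguments, and the one-subdirectory-per-skill layout are assumptions, not part of the skill):

```shell
#!/bin/sh
# Hypothetical wrapper: run the audit once per skill subdirectory so each
# report covers a single skill. $1 = skills directory; $2 = audit script path.
scan_each_skill() {
    skills_dir=$1
    audit=${2:-scripts/fang_audit.py}
    for skill in "$skills_dir"/*/; do
        [ -d "$skill" ] || continue
        name=$(basename "$skill")
        python "$audit" "$skill" --output "report-$name.txt"
    done
}
```

Invoking it as scan_each_skill ~/.openclaw/workspace/skills would produce one static-only report per skill, keeping each report small and avoiding a single scan over unrelated sensitive files.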
