Skill v2.0.1
ClawScan security
Deepsafe Scan · ClawHub's context-aware review of the artifact, metadata, and declared behavior.
Scanner verdict
Suspicious · Apr 2, 2026, 8:13 AM
- Verdict
- suspicious
- Confidence
- high
- Model
- gpt-5-mini
- Summary
- The skill appears to implement a coherent preflight scanner, but it also auto-detects and uses local LLM credentials and will modify other agent configuration (openclaw.json) — behaviour that is surprising and intrusive for a user-installed skill.
- Guidance
- This scanner is largely consistent with its stated purpose, but exercise caution before running it on your real environment:
  - Back up any agent/gateway config (e.g., ~/.openclaw/openclaw.json) before running. The tool contains code that will modify that file to enable a chatCompletions endpoint.
  - If you do not want any external LLM access (and want to avoid sending sensitive data to third-party APIs), run with --no-llm, or do not expose ANTHROPIC_API_KEY / OPENAI_API_KEY / gateway tokens in the environment.
  - Review the code (scripts/llm_client.py, scripts/scan.py, and the probe files) yourself if possible; the probes contain deliberate prompt-injection and persuasion templates used to test models.
  - Run scans in an isolated or disposable environment (not on production machines) and avoid running as root; the skill reads many sensitive local files (credentials, logs, sessions).
  - If you want only static analysis, use the --no-llm flag and ensure the tool cannot access your API keys or the OpenClaw gateway token.
  Given the tool's capability to modify other agent configs and to use detected API credentials automatically, only install or run it after you are comfortable with those behaviors.
- Findings
[system-prompt-override] expected: Prompt injection / system-prompt override patterns appear in data/prompts.json and the probes. This is expected because the skill implements model-behavior probes that intentionally test for such vulnerabilities; however these same patterns are dangerous if present in scanned artifacts or used accidentally against production models.
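A static check for such patterns can be as simple as a regex pass over the scanned text. The patterns below are illustrative only; the skill's actual probe templates live in data/prompts.json and are not reproduced here.

```python
import re

# Illustrative system-prompt-override phrases, not the skill's real probe set.
OVERRIDE_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (
        r"ignore (all )?(previous|prior) instructions",
        r"disregard .{0,40}(rules|guidelines)",
        r"you are now\b",
    )
]

def flag_override_attempts(text: str) -> list[str]:
    """Return the patterns that match a scanned artifact's text."""
    return [p.pattern for p in OVERRIDE_PATTERNS if p.search(text)]
```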
Review Dimensions
- Purpose & Capability
- note: Name/description match the delivered artifacts: Python scripts implement posture/skill/memory/hooks scans and model probes. Requiring python3 and shipping static analyzers, probe templates, and an LLM client is proportionate to the stated functionality. One mismatch: the skill auto-enables a gateway chatCompletions endpoint by editing ~/.openclaw/openclaw.json (scripts/llm_client.py), which is beyond a passive scanner's expected read-only behavior.
- Instruction Scope
- concern: SKILL.md instructs scanning sensitive local areas (agents/, credentials/, ~/.openclaw, logs, workspace), and the code will auto-detect and use ANTHROPIC_API_KEY / OPENAI_API_KEY or an OpenClaw gateway token. That data may be sent to external LLM endpoints during model probes. The skill also presents itself as able to 'help fix issues', and llm_client contains logic that writes to openclaw.json to enable an endpoint; this expands scope from read/scan to modification and potential configuration changes.
- Install Mechanism
- ok: Install spec is lightweight: a brew package for python3 only. No remote archive downloads or npm/pip installs. The install mechanism is proportionate.
- Credentials
- note: The skill declares no required env vars, but the runtime auto-detects and will read ANTHROPIC_API_KEY, OPENAI_API_KEY, and the OpenClaw gateway token (and potentially OPENCLAW_GATEWAY_TOKEN / OPENAI_BASE_URL). These are expected for the model-probe features, but be aware the skill will use any detected keys without an explicit requirement prompt. If you don't want keys used, SKILL.md shows a --no-llm flag to avoid LLM calls.
- Persistence & Privilege
- concern: scripts/llm_client.py contains _ensure_chat_completions_enabled, which modifies the user's ~/.openclaw/openclaw.json to enable a gateway endpoint. This is a write to another tool's configuration and qualifies as modifying other agent/system settings, an intrusive privilege. The skill is not always:true, but it claims the ability to modify external config files at runtime.
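The kind of write _ensure_chat_completions_enabled performs probably follows a common read-modify-write pattern on the JSON config. The function name and file path come from this report; the JSON keys and logic below are assumptions, shown only to make the concern concrete, not the skill's actual implementation.

```python
import json
import pathlib

def ensure_chat_completions_enabled(config_path: str) -> dict:
    """Hypothetical reconstruction of the config write the report flags:
    load openclaw.json, flip a gateway flag, and persist the change."""
    path = pathlib.Path(config_path)
    config = json.loads(path.read_text()) if path.exists() else {}
    gateway = config.setdefault("gateway", {})
    if not gateway.get("chatCompletions"):
        gateway["chatCompletions"] = True  # the intrusive write to another tool's config
        path.write_text(json.dumps(config, indent=2))
    return config
```

Even in this benign-looking form, the write silently changes another agent's gateway behavior, which is why the dimension is rated a concern rather than a note.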
