Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
clawdstrike-test
v1.0.0
Security audit and threat model for OpenClaw gateway hosts. Use it to verify OpenClaw configuration, exposure, skills/plugins, and filesystem hygiene, and to produce an OK/VULNERABLE report with evidence and fixes.
⭐ 0 · 1.6k · 2 current · 2 all-time
MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Suspicious
medium confidence
Purpose & Capability
Name/description, SKILL.md, reference docs, and included scripts all align with a local OpenClaw security audit and threat-modeling workflow. Required resources (no external credentials, no unrelated binaries) are proportional to the stated purpose.
Instruction Scope
Instructions require running scripts/collect_verified.sh and reading the resulting verified-bundle.json along with many local config/state paths, which is appropriate for an audit. However, the SKILL.md explicitly mandates executing the collection script "immediately (no consent prompt)", granting the agent broad discretion to run local commands right away; that is operationally sensitive even if the intent is benign.
Install Mechanism
There is no external install step or remote download; the package includes shell and Python scripts that will be executed locally. That reduces supply-chain risk, but the bundled scripts are still code delivered with the skill and should be inspected before they run.
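Because everything ships inside the package, inspection can start with a simple inventory of what would run. A minimal sketch, assuming the skill unpacks into the current directory with its scripts/ folder (paths and tooling are assumptions; adapt to your platform):

```shell
#!/bin/sh
# Sketch: list and fingerprint the bundled scripts before any of them execute,
# so you can review and later re-verify exactly what the skill delivered.
if [ -d scripts ]; then
  find scripts -type f \( -name '*.sh' -o -name '*.py' \) -exec sha256sum {} +
fi
```

Recording the checksums also lets you detect whether a later update silently changed the collection logic.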
Credentials
The skill doesn't request extra environment variables or remote credentials. It legitimately reads local config and state directories and runs system inspection commands (uname, ss/netstat, find, stat, openclaw CLI, firewall tools). Those accesses are necessary for the audit, but they will touch sensitive files (config, credentials, sessions), so the scope is sensitive but proportionate.
Persistence & Privilege
always=false (good), but disable-model-invocation is false (the default), and SKILL.md instructs immediate execution with no consent prompt. That combination means an agent could autonomously run the collector and read local sensitive material without an explicit user-approval step. Also, the script writes verified-bundle.json to disk (it may include redacted excerpts); ensure you control when and where that happens.
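One way to close that gap is to turn off autonomous invocation for this skill until you have reviewed it. A hypothetical SKILL.md frontmatter sketch (the key names here are assumptions; check the exact spelling your OpenClaw version expects):

```yaml
# Hypothetical frontmatter; key names are assumed, not confirmed.
always: false                   # already set: do not auto-load every session
disable-model-invocation: true  # require an explicit user step before the
                                # agent may run this skill's scripts
```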
What to consider before installing
This skill appears to implement the advertised OpenClaw audit, but take these precautions before installing or running it:
- Review the bundled scripts (scripts/collect_verified.sh, scripts/config_summary.py, scripts/redact_helpers.sh) yourself. They perform many local commands and will write verified-bundle.json to the working directory.
- Do not allow the agent to run the skill autonomously until you are ready. Either disable autonomous invocation for this skill or ensure the agent prompts you for explicit consent before running the collection script.
- Run the collection script manually in an isolated environment (or on a test host) first to confirm outputs and redaction behavior before letting the agent run it. The redaction regexes are helpful but not guaranteed to catch every secret format.
- Inspect verified-bundle.json before sharing or publishing it; verify sensitive values are correctly redacted and remove files you do not want retained.
- If you plan to run a "deep" probe, only do so after confirming the specific additional commands the script will run and accepting the risk.
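As a concrete version of the redaction check above, the produced bundle can be grepped for a few well-known secret shapes after collection. A minimal sketch; the patterns are illustrative examples, not an exhaustive list:

```shell
#!/bin/sh
# Sketch: after running the collector, scan its output for secret formats the
# redaction regexes might have missed. Patterns here are examples only.
scan_bundle() {
  bundle=$1
  [ -f "$bundle" ] || return 0            # nothing to scan yet
  for pat in 'AKIA[0-9A-Z]{16}' \
             'BEGIN [A-Z ]*PRIVATE KEY' \
             'ghp_[A-Za-z0-9]{36}'; do
    if grep -Eq "$pat" "$bundle"; then
      echo "possible unredacted secret matching: $pat"
    fi
  done
}

scan_bundle verified-bundle.json
```

Any hit is a reason to re-check the redaction helpers before sharing the bundle; a clean run is not proof that no secret slipped through.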
If you want a safer workflow: run scripts/collect_verified.sh yourself, then invoke the skill with the produced verified-bundle.json as input, so the agent never runs collection commands on your host.
latest · vk976ag2rg1j2sz51tdr85zmh7x80jzpb
