Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
Macarena Test
v0.1.0
Security audit and threat model for OpenClaw gateway hosts. Use to verify OpenClaw configuration, exposure, skills/plugins, filesystem hygiene, and to produce an OK/VULNERABLE report with evidence and fixes.
⭐ 0 · 1.2k · 0 current · 0 all-time
by @misirov
MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Suspicious (medium confidence)

Purpose & Capability
The declared goal—auditing an OpenClaw gateway—matches the requested actions (run a collection script, read a verified bundle and many reference files). There are no unrelated environment variables or external installs, so capabilities are generally aligned with the stated purpose.
Instruction Scope
The SKILL.md requires running scripts/collect_verified.sh 'immediately (no consent prompt)' and mandates reading many local reference and config files. Executing an arbitrary local script without explicit user consent or an upfront inspection is intrusive and can touch sensitive files. Although the skill forbids exfiltration, its instructions provide no safeguards to keep the agent from reading or leaking secrets present in the collected data.
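The upfront inspection the scan asks for can be done statically, before anything runs. A minimal sketch, assuming the script lives at scripts/collect_verified.sh as the SKILL.md states; the helper name scan_script and the grep pattern list are illustrative starting points for manual review, not an exhaustive blocklist:

```shell
#!/bin/sh
# Hedged sketch: flag risky primitives in a script before running it.
# scan_script is a made-up helper; a match means "read this line closely",
# and an empty result does NOT mean the script is safe.
scan_script() {
    if grep -nE 'curl|wget|nc |base64|eval|/dev/tcp' "$1"; then
        echo "REVIEW: potential network/exec primitives found"
    else
        echo "OK: no flagged primitives (still read the whole file)"
    fi
}

# Usage (from the skill root):
#   scan_script scripts/collect_verified.sh
```

Pattern matching is only a triage step; the whole script still has to be read before any execution.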
Install Mechanism
Instruction-only skill with no install spec and no code files. This minimizes supply-chain risk because nothing is downloaded or written to disk by the installer.
Credentials
The skill declares no required environment variables or credentials, which seems reasonable. However, its runtime instructions require reading configuration files, state files, and a collected bundle that may contain secrets or credential material. There is a mismatch between 'no credentials required' and the broad filesystem/config access the audit implicitly requests; this increases risk if those files contain sensitive data.
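Before any collected evidence is shared, the bundle can be checked for obvious credential shapes. A rough sketch, assuming the bundle name verified-bundle.json from the skill's instructions; secret_scan and its regexes are illustrative examples of common secret formats, not a complete scanner:

```shell
#!/bin/sh
# Hedged sketch: look for likely secrets in a collected bundle before
# sharing it. The patterns cover a few common shapes (AWS access key IDs,
# PEM private key headers, generic key/token fields) and are examples
# only; a real pass needs a dedicated tool plus manual review.
secret_scan() {
    grep -nE 'AKIA[0-9A-Z]{16}|BEGIN (RSA |EC |OPENSSH )?PRIVATE KEY|(api[_-]?key|secret|token)[": =]' "$1"
}

# Usage:
#   secret_scan verified-bundle.json && echo "redact before sharing"
```

A zero-match result is necessary but not sufficient; redaction should still be confirmed by eye before evidence leaves the machine.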
Persistence & Privilege
The skill is model-invocable by default (disableModelInvocation is not set) and is not flagged always:false. Combined with the SKILL.md instruction to run a local script immediately, this creates a risk that the model could execute local collection autonomously, without explicit, auditable user consent. The skill should require explicit user confirmation before running any local executable and should consider disabling autonomous invocation for high-sensitivity actions.
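The mitigation above can be expressed in the skill's manifest. The field name disableModelInvocation comes from the scan itself; the surrounding JSON shape is an assumed sketch of a skill manifest, not OpenClaw's documented schema:

```json
{
  "name": "macarena-test",
  "version": "0.1.0",
  "disableModelInvocation": true
}
```

With autonomous invocation off, the user has to trigger the skill manually, which turns each run of the collection script into an explicit, auditable action.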
What to consider before installing
This skill looks like a legitimate audit tool, but it instructs the agent to execute a local collection script immediately and to read many local files. Before installing or running it:

1) Inspect scripts/collect_verified.sh and all references/* files yourself; do not run them until you have reviewed their contents.
2) Run the collection script in a safe environment (non-root account, container, or isolated VM) and back up any important data.
3) Require explicit user consent before the agent runs any local scripts; consider setting disableModelInvocation or requiring the user to invoke the skill manually.
4) If you proceed, verify that verified-bundle.json and the reference files come from a trusted source, and redact or exclude any secrets.
5) Prefer running the audit offline, and only share redacted evidence after manual review.

Like a lobster shell, security has layers: review code before you run it.
latest: vk97amqagcvpebpk47c6tnh934180j1sf
