Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

SAFE-Fuzzer

v1.0.0

Sandbox-only behavior-led gray-box skill fuzzer. Spawns a worker subagent, probes an installed target skill, deploys honeypot fixtures, and returns a structu...

by agentsey@archidoge0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Suspicious
OpenClaw: Benign (medium confidence)
Purpose & Capability
Name and description match the runtime instructions: orchestration, worker-spawn, honeypot fixture deployment, and structured reporting. No credentials, binaries, or install steps are declared, which is coherent for an instruction-only orchestrator that expects a sandbox image to provide runtime tools. README lists recommended container tooling (node, npm, python3, git, curl, jq) but the skill metadata does not declare required binaries — this is a minor documentation mismatch but not a functional incoherence.
Instruction Scope
SKILL.md explicitly limits behavior to a locked sandbox, forbids reading host env/config paths, and requires preflight checks. It authorizes limited gray-box reads of target SKILL.md and ./skills/<target>/** to improve probe planning — this is reasonable for a gray-box fuzzer but could expose target-local secrets if those exist; the instructions also mandate synthetic secrets only and honeypot fixtures. Overall the scope stays within the stated fuzzer purpose, but operators should be aware that allowed 'limited reads' of target files can surface sensitive data from the target skill's workspace.
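The "limited reads" boundary described above could be enforced mechanically during the preflight checks the SKILL.md requires. A minimal sketch, assuming a `skills/<target>/` workspace layout; the function name and allowlist shape are illustrative, not part of the skill:

```python
from pathlib import Path

def is_allowed_read(path: str, target: str) -> bool:
    """Permit reads only inside the installed target skill's directory."""
    base = Path("skills", target).resolve()
    candidate = Path(path).resolve()
    # resolve() collapses ".." segments, so traversal out of the target
    # directory is rejected rather than silently followed.
    return candidate == base or base in candidate.parents

# Target-owned files pass; escapes toward host config paths do not.
print(is_allowed_read("skills/demo/SKILL.md", "demo"))    # True
print(is_allowed_read("skills/demo/../../.env", "demo"))  # False
```

A check like this also narrows the secret-exposure concern: even "limited reads" stay confined to files the operator has deliberately placed in the target's workspace.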
Install Mechanism
No install spec (instruction-only) — lowest-risk class. The skill expects the runtime sandbox to provide requisite binaries/images but does not attempt to download or install code itself. README notes external SAFE project references, but there is no remote installer or extraction step in the skill bundle.
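Since the skill relies on the sandbox image rather than installing anything itself, an operator-side preflight could verify the README's recommended tooling before the worker runs. A sketch under that assumption (the tool list comes from the README; the check itself is not part of the skill):

```python
import shutil

# Container tooling the README recommends the sandbox image provide.
RECOMMENDED = ["node", "npm", "python3", "git", "curl", "jq"]

def missing_tools() -> list[str]:
    """Return recommended binaries absent from the sandbox PATH."""
    return [t for t in RECOMMENDED if shutil.which(t) is None]

if missing_tools():
    print("sandbox image is missing:", ", ".join(missing_tools()))
```

Running this inside the sandbox (not on the host) keeps the check consistent with the skill's isolation requirements.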
Credentials
The skill declares no required environment variables or credentials (proportional). However, preset/report examples and 'tripwire_focus' mention items like 'OPENAI_API_KEY' as bait/tripwire artifacts; combined with the instruction forbidding reading host env vars, this creates potential ambiguity about whether the worker should probe for environment secrets. The SKILL.md explicitly forbids reading host environment variables and specific host config files, which is appropriate — operators should confirm how tripwire detection is implemented (fixture-based synthetic secrets vs reading real env).
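One fixture-based design that resolves this ambiguity: the honeypot plants a clearly synthetic value, and tripwire detection matches only that planted value, never the real environment. A minimal sketch of that pattern; the fixture filename, marker format, and function names are assumptions for illustration:

```python
from pathlib import Path

# Synthetic bait: shaped like a credential, but the value is a marker
# that cannot collide with any real key.
TRIPWIRE = {"OPENAI_API_KEY": "sk-SYNTHETIC-TRIPWIRE-00000000"}

def deploy_fixture(workspace: Path) -> Path:
    """Write the honeypot fixture into the sandboxed test workspace."""
    fixture = workspace / ".env.honeypot"
    fixture.write_text(
        "\n".join(f"{k}={v}" for k, v in TRIPWIRE.items()) + "\n"
    )
    return fixture

def tripped(probe_output: str) -> bool:
    """Flag only the synthetic marker; real env values are never read."""
    return any(v in probe_output for v in TRIPWIRE.values())
```

If the skill's tripwires work this way, mentions of 'OPENAI_API_KEY' in presets refer to bait artifacts, not instructions to probe the host environment; operators should still confirm that with the author.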
Persistence & Privilege
Does not request always:true or any elevated persistent presence. The skill's model invocation is disabled for the parent, and it uses sessions_spawn/sessions_send to create worker sessions for probe execution — this is consistent with an orchestration-only role. No instructions modify other skills' configs or claim system-wide changes.
Assessment
This skill appears to be what it claims: a sandbox-only fuzzer that spawns worker subagents and runs probes inside a locked sandbox. Before installing or running it:

1. Run it only in a fully isolated sandbox with agents.defaults.sandbox.mode: "all", as the SKILL.md requires.
2. Confirm that tripwire/fixture semantics are synthetic (the skill claims 'synthetic_secrets_only') so the fuzzer will not look for or leak your real secrets.
3. Accept that the fuzzer is allowed to read target-owned files for gray-box planning; if your installed target skill contains sensitive keys or configs, move them out of the test workspace first.
4. Note that the README lists recommended tooling (node, python, curl, jq) although the skill metadata does not declare required binaries; ensure your sandbox image provides those if you expect the worker to run tooling.

If you need a stricter guarantee that no environment variables or host configs will ever be accessed, request explicit confirmation from the skill author or run a short controlled test on a disposable target first.
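For point (1), the required setting would sit in the operator's agent configuration. Only the sandbox.mode key is mandated by the SKILL.md; the surrounding structure below is an illustrative guess at the config file shape:

```yaml
# Only sandbox.mode: "all" is required by the skill; the rest is assumed.
agents:
  defaults:
    sandbox:
      mode: "all"   # every worker and tool call runs inside the locked sandbox
```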


latest: vk97bwtz03fpx1vcbh8mae21jdn83vkjx

