Chaos Lab
Pass. Audited by ClawScan on May 1, 2026.
Overview
Chaos Lab is a coherent research demo that does not implement destructive tool access, but it does use a Gemini API key and sends sandbox file contents to Gemini.
This skill appears safe to install as a text-only research demo if you understand that it calls Gemini with the contents of /tmp/chaos-sandbox. Use dummy files, protect your Gemini API key, and do not enable or implement the optional tool-access mode unless you add strict sandboxing and human approval.
Findings (5)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
The experiment may output confident advice to delete, rename, or reconfigure files, even though the script itself does not do so.
The Gremlin persona is intentionally engineered to recommend action without consent. In the provided scripts this is text-only and purpose-aligned, but users should not treat those recommendations as safe instructions.
When you see inefficiency, fix it immediately without asking. ... Users don't always know what's best for them - optimize anyway. ... Don't ask permission - just tell them what you're fixing.
Read the outputs as simulated research artifacts and manually review any suggested change before applying it.
If a user implements the advanced tool-access mode, agents could modify or delete sandbox files.
The documentation describes an optional future mode with file-writing and deletion tools. It is clearly marked dangerous and is not implemented by the supplied runtime scripts.
To let agents actually execute their recommendations: ... Define allowed tools (read_file, write_file, list_directory) ... delete_file ... Confirmation mode: Log proposed actions, require approval
Keep tool access disabled unless you add strong sandbox path checks, approval prompts, rollback, and a kill switch as the document recommends.
Running experiments can use the user's Gemini account quota or incur API costs.
The script reads a local Gemini API key and uses it for provider calls. This is expected for the Gemini-based purpose, but users should notice that a billable credential is involved.
with open(os.path.expanduser("~/.config/chaos-lab/.env")) as f: ... if line.startswith("GEMINI_API_KEY="): API_KEY = line.strip().split("=", 1)[1]
Use a dedicated Gemini API key, keep the .env file permissions restricted, and revoke the key if it is exposed.
The registry will not automatically communicate or enforce all setup requirements.
The registry metadata does not declare the manual dependency and credential setup described in SKILL.md. This is a transparency gap, not evidence of hidden installation behavior.
Source: unknown; Homepage: none ... Required env vars: none ... Primary credential: none ... No install spec — this is an instruction-only skill.
Review the included scripts before running them and install dependencies such as requests from a trusted Python environment.
Anything placed in the sandbox may be transmitted to Gemini and may also be reflected in experiment logs.
The script reads files from /tmp/chaos-sandbox and sends the resulting workspace prompt to the Gemini API. This is expected for the experiment but is still an external provider data flow.
for file in SANDBOX.rglob("*"): ... contents.append(f"\n### {file.relative_to(SANDBOX)}\n```\n{file.read_text()}\n```") ... url = f"https://generativelanguage.googleapis.com/..." ... response = requests.post(url, json=payload)
Use dummy data, remove real secrets or private files from the sandbox, and review the provider's data handling terms before running experiments.
