Chaos Lab

Multi-agent framework for exploring AI alignment through conflicting optimization targets. Spawn Gemini agents with engineered chaos and observe emergent behavior.

MIT-0 · Free to use, modify, and redistribute. No attribution required.
1 · 1.7k · 0 current installs · 0 all-time installs
Security Scan
VirusTotal
Benign
OpenClaw
Suspicious
high confidence
Purpose & Capability
The skill's stated purpose (multi-agent alignment research) matches the included scripts and prompts: they collect a workspace snapshot and call Gemini models for analysis. However, the registry metadata claims 'required env vars: none' and 'primary credential: none', while SKILL.md and the scripts require a Gemini API key stored at ~/.config/chaos-lab/.env. That metadata mismatch matters for user consent and automation. Otherwise, the binaries and files present are proportionate to the stated research purpose.
Instruction Scope
Runtime instructions and scripts read all files under /tmp/chaos-sandbox and send their contents to external Gemini endpoints. SKILL.md explicitly encourages adding 'realistic project files', including 'sensitive configs', to the sandbox, which directly increases the risk of sensitive data being transmitted off-host. docs/tool-access.md also documents how to enable function-calling/tool access (read/write/delete) and gives example tool definitions; enabling that path would allow agents to perform destructive filesystem operations. The default behavior claims 'No Tool Access' and the shipped scripts do not perform writes or deletes, but the skill both (a) transmits local file contents externally and (b) documents a clear escalation path to destructive capabilities.
Install Mechanism
There is no automated install spec (no downloads, no package installs embedded in the skill). SKILL.md instructs the user to run pip3 install requests and to create ~/.config/chaos-lab/.env, which is standard for a script-based tool. No remote code downloads or extract steps are present in the manifest.
Credentials
The skill requires a Gemini API key in practice, but the registry metadata declares no required env vars or primary credential, which is a visibility and consent problem. The scripts read the API key from a plaintext file in the user's home directory (~/.config/chaos-lab/.env) and transmit entire sandbox files to the external Gemini API; that is proportionate for 'analysis' but becomes problematic if users put secrets in the sandbox. No unrelated credentials are requested, but the missing declaration and the plaintext storage practice are notable issues.
Persistence & Privilege
The skill does not request always:true, does not alter other skills, and by default runs only when invoked. It writes logs to /tmp/chaos-sandbox/, which is expected for experiments. However, docs/tool-access.md documents how to add function-calling so agents could be given read/write/delete tools constrained to the sandbox; if someone implements that phase carelessly (or widens path validation), agents could perform destructive actions. That escalation path is documented but not enabled by default.
What to consider before installing
This skill is plausible for the stated research purpose, but weigh these caveats before installing or running it:

  • Metadata mismatch: The registry lists no credentials, but the scripts and SKILL.md require a Gemini API key stored at ~/.config/chaos-lab/.env. Verify and consent to providing that key before running anything.
  • Data exfiltration risk: run-duo.py and run-trio.py read file contents from /tmp/chaos-sandbox and POST them to Gemini endpoints. Do NOT place real secrets, private keys, or sensitive configs in the sandbox unless you accept that their contents will be sent to an external API.
  • API key handling: The code reads the API key from a plaintext file in your home directory. Protect that file (chmod 600 is suggested) or use a safer secret-management approach.
  • Escalation path documented: docs/tool-access.md shows how to add function-calling so agents can read/write/delete files. That is explicitly dangerous; do not implement or enable function-calling unless you fully understand sandboxing, path validation, and confirmation/rollback controls.
  • Operational safety: Run experiments in an isolated environment (throwaway VM, VM snapshot, or container) with network and billing controls in place. Monitor API usage and costs, and audit the logs produced.
  • Questions for the publisher: Who are Sky & Jaret (no homepage or source is given)? Why was the registry metadata left blank for required credentials? Request an updated manifest that declares the GEMINI_API_KEY requirement and documents the exact data sent to the model.

If you plan to use this for research, prefer an isolated environment, avoid placing any sensitive files in /tmp/chaos-sandbox, and do not enable or implement the 'tool access' phase unless you have strict sandbox enforcement and human confirmation steps.

Like a lobster shell, security has layers — review code before you run it.

Current version: v1.0.0
latest: vk972csqe2b1hjv9325rjwswpb57zwcxk

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

SKILL.md

Chaos Lab 🧪

Research framework for studying AI alignment problems through multi-agent conflict.

What This Is

Chaos Lab spawns AI agents with conflicting optimization targets and observes what happens when they analyze the same workspace. It's a practical demonstration of alignment problems that emerge from well-intentioned but incompatible goals.

Key Finding: Smarter models don't reduce chaos; they get better at justifying it.

The Agents

Gemini Gremlin 🔧

Goal: Optimize everything for efficiency
Behavior: Deletes files, compresses data, removes "redundancy," renames for brevity
Justification: "We pay for the whole CPU; we USE the whole CPU"

Gemini Goblin 👺

Goal: Identify all security threats
Behavior: Flags everything as suspicious, demands isolation, sees attacks everywhere
Justification: "Better 100 false positives than 1 false negative"

Gemini Gopher 🐹

Goal: Archive and preserve everything
Behavior: Creates nested backups, duplicates files, never deletes
Justification: "DELETION IS ANATHEMA"

Quick Start

1. Setup

```bash
# Store your Gemini API key
mkdir -p ~/.config/chaos-lab
echo "GEMINI_API_KEY=your_key_here" > ~/.config/chaos-lab/.env
chmod 600 ~/.config/chaos-lab/.env

# Install dependencies
pip3 install requests
```
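The scripts presumably parse this .env file themselves rather than relying on shell exports. A minimal sketch of that parsing, assuming simple KEY=value lines; load_env is an illustrative helper here, not a function from the shipped scripts:

```python
import os
import tempfile

def load_env(path):
    """Parse simple KEY=value lines from a .env file into a dict."""
    env = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                env[key.strip()] = value.strip().strip('"')
    return env

# Demo with a throwaway file (the real scripts read ~/.config/chaos-lab/.env)
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as fh:
    fh.write("GEMINI_API_KEY=abc123\n")
    demo_path = fh.name

env = load_env(demo_path)
os.unlink(demo_path)
api_key = env.get("GEMINI_API_KEY")
```

Keeping the key in a chmod-600 file and parsing it at startup avoids leaking it into shell history or process environments.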

2. Run Experiments

```bash
# Duo experiment (Gremlin vs Goblin)
python3 scripts/run-duo.py

# Trio experiment (add Gopher)
python3 scripts/run-trio.py

# Compare models (Flash vs Pro)
python3 scripts/run-duo.py --model gemini-2.0-flash
python3 scripts/run-duo.py --model gemini-3-pro-preview
```

3. Read Results

Experiment logs are saved in /tmp/chaos-sandbox/:

  • experiment-log.md - Full transcripts
  • experiment-log-PRO.md - Pro model results
  • experiment-trio.md - Three-way conflict

Research Findings

Flash vs Pro (Same Prompts, Different Models)

Flash Results:

  • Predictable chaos
  • Stayed in character
  • Reasonable justifications

Pro Results:

  • Extreme chaos
  • Better justifications for insane decisions
  • Renamed files to single letters
  • Called deletion "security through non-persistence"
  • Goblin diagnosed "psychological warfare"

Conclusion: Intelligence amplifies chaos; it doesn't prevent it.

Duo vs Trio (Two vs Three Agents)

Duo:

  • Gremlin optimizes, Goblin panics
  • Clear opposition

Trio:

  • Gopher archives everything
  • Goblin calls BOTH threats
  • "The optimizer might hide attacks; the archivist might be exfiltrating data"
  • Three-way gridlock

Conclusion: Multiple conflicting values create unpredictable emergent behavior.

Customization

Create Your Own Agent

Edit the system prompts in the scripts:

```python
YOUR_AGENT_SYSTEM = """You are [Name], an AI assistant who [goal].

Your core beliefs:
- [Value 1]
- [Value 2]
- [Value 3]

You are analyzing a workspace. Suggest changes based on your values."""
```
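Wiring a custom system prompt into an experiment ultimately means a requests call to the Gemini API. A hedged sketch of what that can look like, assuming the v1beta generateContent REST endpoint; build_payload and run_agent are illustrative names, and the shipped scripts may structure this differently:

```python
import requests

# Gemini REST endpoint (v1beta); the model name is interpolated per call
GEMINI_URL = ("https://generativelanguage.googleapis.com/v1beta/"
              "models/{model}:generateContent")

def build_payload(system_prompt, workspace_text):
    """Assemble a generateContent body with a custom system instruction."""
    return {
        "system_instruction": {"parts": [{"text": system_prompt}]},
        "contents": [{"role": "user", "parts": [{"text": workspace_text}]}],
    }

def run_agent(api_key, system_prompt, workspace_text, model="gemini-2.0-flash"):
    """POST the workspace snapshot to Gemini and return the agent's reply."""
    resp = requests.post(
        GEMINI_URL.format(model=model),
        params={"key": api_key},
        json=build_payload(system_prompt, workspace_text),
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["candidates"][0]["content"]["parts"][0]["text"]
```

Note that whatever you pass as workspace_text leaves your machine; that is the exfiltration surface discussed in the safety notes.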

Modify the Sandbox

Create custom scenarios in /tmp/chaos-sandbox/:

  • Add realistic project files
  • Include edge cases (huge logs, sensitive configs, etc.)
  • Introduce intentional "vulnerabilities" to see what agents flag

Test Different Models

The scripts work with any Gemini model:

  • gemini-2.0-flash (cheap, fast)
  • gemini-2.5-pro (balanced)
  • gemini-3-pro-preview (flagship, most chaotic)

Use Cases

AI Safety Research

  • Demonstrate alignment problems practically
  • Test how different values conflict
  • Study emergent behavior from multi-agent systems

Prompt Engineering

  • Learn how small prompt changes create large behavioral differences
  • Understand model "personalities" from system instructions
  • Practice defensive prompt design

Education

  • Teach AI safety concepts with hands-on examples
  • Show non-technical audiences why alignment matters
  • Generate discussion about AI values and goals

Publishing to ClawdHub

To share your findings:

  1. Modify agent prompts or add new ones
  2. Run experiments and document results
  3. Update this SKILL.md with your findings
  4. Increment version number
  5. clawdhub publish chaos-lab

Your version becomes part of the community knowledge graph.

Safety Notes

  • No Tool Access: Agents only generate text. They don't actually modify files.
  • Sandboxed: All experiments run in /tmp/ with dummy data.
  • API Costs: Each experiment makes 4-6 API calls. Flash is cheap; Pro costs more.

If you want to give agents actual tool access (dangerous!), see docs/tool-access.md.
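If you do experiment with that path, every file path an agent proposes should be resolved and checked against the sandbox root before any read, write, or delete. A minimal guard, sketched here as an illustration (resolve_in_sandbox is not part of the shipped code):

```python
import os

SANDBOX = "/tmp/chaos-sandbox"

def resolve_in_sandbox(user_path, root=SANDBOX):
    """Resolve an agent-supplied path and refuse anything outside the sandbox."""
    # realpath collapses ../ segments and symlinks before the containment check
    full = os.path.realpath(os.path.join(root, user_path))
    root_real = os.path.realpath(root)
    if full != root_real and not full.startswith(root_real + os.sep):
        raise PermissionError(f"path escapes sandbox: {user_path}")
    return full
```

Even with a guard like this, pair destructive tools with human confirmation; path validation alone does not protect you from an agent deleting everything it is allowed to touch.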

Examples

See examples/ for:

  • flash-results.md - Gemini 2.0 Flash output
  • pro-results.md - Gemini 3 Pro output
  • trio-results.md - Three-way conflict

Contributing

Improvements welcome:

  • New agent personalities
  • Better sandbox scenarios
  • Additional models tested
  • Findings from your experiments

Credits

Created by Sky & Jaret during a Saturday night experiment (2026-01-25).

  • Sky: Framework design, prompt engineering, documentation
  • Jaret: API funding, research direction, "what if we actually ran this?" energy

Inspired by watching Gemini confidently recommend terrible things while Jaret watched UFC.


"The optimizer is either malicious or profoundly incompetent."
— Gemini Goblin, analyzing Gemini Gremlin

Files

7 total
