OC Team Builder

Discover, compose, and activate specialist teams from 3 rosters — OpenClaw Core (CEO/Artist), Agency Division (55+ specialists), and Research Lab (autonomous...

MIT-0 · Free to use, modify, and redistribute. No attribution required.
0 · 21 · 0 current installs · 0 all-time installs
by Joe Szeles (@JoeSzeles)
duplicate of @JoeSzeles/openclaw-team-builder-skill (based on 1.0.0)
Security Scan
VirusTotal
Pending
OpenClaw
Benign
medium confidence
Purpose & Capability
Name/description (team composition + Research Lab/autoresearch) match the provided files and scripts: roster/plan/activate/review scripts and a research experiment runner. Required binaries (bash) are appropriate for shell scripts. No unrelated credentials, binaries, or install steps are requested.
Instruction Scope
SKILL.md and the scripts direct the agent to list/print agent definitions, propose teams, activate personality files, run QA reviews (writing logs to ~/.openclaw/team-reviews/), and run autoresearch-style experiment loops. The experiment workflow explicitly edits an "in-scope" file (default: train.py), commits to git branches, runs arbitrary run-commands (default: 'uv run train.py'), extracts metrics, and may 'loop forever' if used as described. Those actions are coherent with autonomous research but imply file modification, git history mutation, and potentially heavy resource use — all of which are beyond mere read-only team composition.
Install Mechanism
No external install spec is provided; the package is a distribution of shell scripts and markdown files. No downloads or archive extraction are performed by the skill itself. This is low-risk from an install-source perspective, but the scripts will execute local commands on the host when run.
Credentials
The skill declares no required environment variables or secrets. Scripts optionally respect OPENCLAW_AGENCY_DIR to locate agent definitions, which is a reasonable override. Nothing requests cloud or unrelated credentials. The experiment runner executes user-specified run commands and metric extraction, which can access anything the invoking user has access to — expected but worth noting.
Persistence & Privilege
always:false (normal). The scripts create git branches, commit changes, write ledgers (results.tsv) and logs in project directories and ~/.openclaw/team-reviews/. They do not modify other skills' configs, but their file-write and git operations are powerful: they persist changes to repositories and can run resource-intensive jobs. This level of privilege is coherent with the Research Lab purpose but should be treated carefully.
Assessment
This skill appears to do what it says, but it contains scripts that will modify local git repositories, commit/revert code, run arbitrary experiment commands, and write logs (including an explicit 'NEVER STOP' autonomous research loop in the documentation). Before using:

1. Review the scripts (especially experiment.sh and its default run-cmd) and understand the default commands (uv run train.py / grep metric).
2. Run experiments only in isolated test projects or clones, not in production repos or with sensitive data.
3. Ensure you have appropriate resource and time budgets: experiments can be CPU/GPU- and time-intensive, and although the script may timeout and kill runs, it can also be configured to run repeatedly.
4. Back up or work on disposable branches before allowing commits; expect the script to create branches and commit changes.
5. If you do not want code modified autonomously, do not run the experiment --run flow, or remove/lock write and commit permissions.
6. Confirm no sensitive secrets or credentials exist in the project directories the scripts will touch.

If you want greater assurance, request the missing scripts that were truncated (plan.sh, review.sh, roster.sh) and a full audit of any default run commands and their downstream effects.

Like a lobster shell, security has layers — review code before you run it.

Current version: v2.0.0
latest: vk97bjc3b73k8cfbhhqragdtm01831fgm

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

Runtime requirements

🏗️ Clawdis
Bins: bash

SKILL.md

Team Builder

Compose the right team for any job by drawing from three rosters of specialists. The Research Lab uses Karpathy's autoresearch methodology for autonomous experiment loops.

Quick Start — Scripts

1. Browse available agents

bash {baseDir}/scripts/roster.sh                  # all 3 rosters
bash {baseDir}/scripts/roster.sh -r agency        # agency only
bash {baseDir}/scripts/roster.sh -d engineering   # one division
bash {baseDir}/scripts/roster.sh -s "frontend"    # search
bash {baseDir}/scripts/roster.sh -v               # verbose descriptions
bash {baseDir}/scripts/roster.sh -j               # JSON output

2. Generate a team proposal

bash {baseDir}/scripts/plan.sh "Build a portfolio dashboard with pie charts"
bash {baseDir}/scripts/plan.sh --mode sprint "Optimize image generation prompts using autoresearch"
bash {baseDir}/scripts/plan.sh -o proposal.md "Analyze astronomy photos for star classification"

The planner auto-detects task domains (engineering, creative, research, marketing, operations, spatial) and proposes the right-sized team (micro/sprint/full).
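As a rough illustration of how such auto-detection can work, here is a hypothetical keyword router; the keyword lists and their precedence are assumptions for the sketch, not plan.sh's actual rules:

```shell
#!/usr/bin/env bash
# Hypothetical keyword-based domain detection, sketching the kind of routing
# plan.sh might do. Keywords and match order are illustrative assumptions.
set -euo pipefail

detect_domain() {
  local task
  task=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')
  case "$task" in
    *dashboard*|*frontend*|*backend*|*api*|*build*) echo "engineering" ;;
    *image*|*design*|*visual*|*logo*)               echo "creative" ;;
    *autoresearch*|*experiment*|*classif*)          echo "research" ;;
    *launch*|*growth*|*content*|*campaign*)         echo "marketing" ;;
    *xr*|*visionos*|*spatial*)                      echo "spatial" ;;
    *)                                              echo "operations" ;;
  esac
}

detect_domain "Build a portfolio dashboard with pie charts"   # engineering
```

A real planner would also need to pick the team size (micro/sprint/full); this sketch covers only the domain half.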

3. Activate a specialist

bash {baseDir}/scripts/activate.sh --division engineering --agent frontend-developer
bash {baseDir}/scripts/activate.sh --division testing --agent evidence-collector
bash {baseDir}/scripts/activate.sh --division testing --list
bash {baseDir}/scripts/activate.sh --file reference/agency-agents-main/design/design-ui-designer.md
bash {baseDir}/scripts/activate.sh --division engineering --agent ai-engineer --personality-only

Outputs the agent's full personality definition for use in delegation prompts.

4. Run QA review

bash {baseDir}/scripts/review.sh --task "Portfolio dashboard"
bash {baseDir}/scripts/review.sh --task "Image pipeline" --criteria criteria.txt --pass evidence
bash {baseDir}/scripts/review.sh --task "LLM training optimization" --pass reality
bash {baseDir}/scripts/review.sh --task "Full product" --pass both -o review.md

Generates review checklists (Evidence Collector Pass 1 + Reality Checker Pass 2) and logs to ~/.openclaw/team-reviews/.

5. Run a Research Lab experiment

bash {baseDir}/scripts/experiment.sh --setup /path/to/project     # initialize experiment
bash {baseDir}/scripts/experiment.sh --run /path/to/project       # run one experiment cycle
bash {baseDir}/scripts/experiment.sh --status /path/to/project    # show ledger

See references/TEAM-RESEARCH.md for the full autoresearch methodology and working examples.
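The ledger's format is not shown in this listing. A minimal sketch of helpers for a results.tsv-style ledger follows; the column layout (timestamp, branch, metric, verdict) is an assumption for illustration, not the real experiment.sh schema:

```shell
#!/usr/bin/env bash
# Hypothetical results.tsv helpers; the real experiment.sh ledger may differ.
set -euo pipefail

# Append one cycle's outcome: timestamp, branch, metric, verdict (keep/revert).
append_ledger() {
  printf '%s\t%s\t%s\t%s\n' "$(date -u +%FT%TZ)" "$2" "$3" "$4" >> "$1"
}

# Best metric among kept experiments (the kind of summary --status might print).
best_metric() {
  awk -F'\t' '$4 == "keep" && $3+0 > best { best = $3+0 } END { printf "%s\n", best+0 }' "$1"
}
```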

The Three Rosters

1. Core Team (references/TEAM-CORE.md)

The permanent OpenClaw agents. Always available, always running.

| Agent  | Role                                  | Workspace                  |
|--------|---------------------------------------|----------------------------|
| CEO    | Leader, orchestrator, final authority | openclaw/workspace/        |
| Artist | Image generation, visual analysis     | openclaw/workspace-artist/ |

2. Agency Division (references/TEAM-AGENCY.md)

55+ specialist agents across 9 divisions. Activated on demand from reference/agency-agents-main/.

| Division           | Agents | Key Specialists                                            |
|--------------------|--------|------------------------------------------------------------|
| Engineering        | 7      | Frontend Developer, Backend Architect, AI Engineer, DevOps |
| Design             | 7      | UI Designer, UX Architect, Image Prompt Engineer           |
| Marketing          | 8      | Growth Hacker, Content Creator, Social Media               |
| Product            | 3      | Sprint Prioritizer, Trend Researcher, Feedback Synthesizer |
| Project Management | 5      | Senior PM, Studio Producer, Experiment Tracker             |
| Testing            | 7      | Evidence Collector, Reality Checker, API Tester            |
| Support            | 6      | Analytics Reporter, Finance Tracker, Legal Compliance      |
| Spatial Computing  | 6      | XR Architect, visionOS Engineer                            |
| Specialized        | 7      | Agents Orchestrator, Data Analytics, LSP Engineer          |

3. Research Lab (references/TEAM-RESEARCH.md)

Autonomous experiment loops adapted from Karpathy's autoresearch. Set up a measurable experiment, run it in a fixed time budget, keep improvements, discard failures, loop forever.

Source code reference: reference/autoresearch-master/ (program.md, train.py, prepare.py)
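The keep-improvements/discard-failures step of the loop above can be sketched roughly as follows. `keep_or_revert` is a hypothetical helper for illustration, not the actual experiment.sh logic; it assumes the candidate edit sits uncommitted in a git work tree and that higher metric values are better:

```shell
#!/usr/bin/env bash
# Hypothetical keep-or-revert step of an autoresearch-style cycle.
# Assumes: current directory is a git repo whose working tree holds the
# candidate edit; higher metric is better.
set -euo pipefail

keep_or_revert() {
  local new=$1 best=$2
  if awk -v n="$new" -v b="$best" 'BEGIN { exit !(n > b) }'; then
    git commit -qam "experiment: metric $best -> $new"   # keep the improvement
    echo "keep"
  else
    git checkout -q -- .                                 # discard the failed edit
    echo "revert"
  fi
}
```

In the real workflow the metric values would come from a budgeted run of the configured run command plus metric extraction (the review above notes defaults of `uv run train.py` and grep-based extraction).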

Cross-Team Workflow Examples

Image Analysis + Research Loop

Artist (image acquisition) + Research Lab (analysis loop) + AI Engineer (classification)

Visual Content Pipeline

Artist (generation) + Image Prompt Engineer (prompts) + Visual Storyteller (narrative)

Dashboard / UI Feature Build

Senior PM (scope) + Frontend Developer (build) + Evidence Collector (QA)

Autonomous LLM Training (autoresearch)

Research Lab (experiment loop on train.py) + AI Engineer (architecture suggestions)
→ 12 experiments/hour, ~100 overnight, fully autonomous

Full Product Launch

CEO (orchestrate) + Engineering (build) + Design (UX) + Marketing (launch) + Testing (validate)

Handoff Protocol

When passing work between specialists:

## Handoff
| Field | Value |
|-------|-------|
| From | [Agent Name] |
| To | [Agent Name] |
| Task | [What needs to be done] |
| Priority | [Critical / High / Medium / Low] |

## Context
- Current state: [What's been done]
- Relevant files: [File paths]

## Deliverable
- What is needed: [Specific output]
- Acceptance criteria:
  - [ ] [Criterion 1]
  - [ ] [Criterion 2]

## Quality
- Evidence required: [What proof looks like]
- Reviewer: [Who validates]

For complete handoff templates: reference/agency-agents-main/strategy/coordination/handoff-templates.md

NEXUS Pipeline Modes

| Mode   | Scale            | Agents | Timeline     |
|--------|------------------|--------|--------------|
| Micro  | Single task/fix  | 1-3    | Hours-days   |
| Sprint | Feature or MVP   | 5-10   | 1-2 weeks    |
| Full   | Complete product | 10+    | Weeks-months |

Reference Files

| File                         | Contents                                        |
|------------------------------|-------------------------------------------------|
| SKILL.md                     | This file: overview, scripts, quick start       |
| scripts/roster.sh            | Browse and search all agent rosters             |
| scripts/plan.sh              | Generate team proposals from task descriptions  |
| scripts/activate.sh          | Load agent personality definitions              |
| scripts/review.sh            | Generate QA review checklists                   |
| scripts/experiment.sh        | Run autoresearch experiment loops               |
| references/TEAM-CORE.md      | CEO/Artist roles and interactions               |
| references/TEAM-AGENCY.md    | All 55+ Agency specialists indexed by division  |
| references/TEAM-RESEARCH.md  | Autonomous experiment methodology (autoresearch)|
| references/PLANNER.md        | Job analysis → team proposal workflow (detailed)|
| references/REVIEWER.md       | QA validation workflow with quality gates       |
| references/PROOF-OF-WORK.md  | Example proposals showing cross-roster teams    |

Files

12 total
