Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Openclaw Team Builder

Discover, compose, and activate specialist teams from 3 rosters — OpenClaw Core (CEO/IG/Artist), Agency Division (55+ specialists), and Research Lab (autonom...

MIT-0 · Free to use, modify, and redistribute. No attribution required.
0 · 21 · 0 current installs · 0 all-time installs
by Joe Szeles (@JoeSzeles)
Security Scan
VirusTotal
Suspicious
View report →
OpenClaw
Suspicious
high confidence
Purpose & Capability
The stated purpose is to propose and activate specialist teams (planner/reviewer/orchestrator). That scope does not legitimately explain some capabilities the documents claim: live market access and trade execution (scalper bot control, Config Write API), use of particular models/services (openai/gpt-4o, xAI grok image APIs), and persistent workspaces under .openclaw. A team-composition skill would not itself need direct trading credentials or cloud/model API keys; those are missing from the declared requirements and therefore disproportionate to the stated purpose.
Instruction Scope
SKILL.md and the companion docs instruct agents to read local reference files, adopt other agents' identities, read/write from .openclaw workspaces, 'read current strategy config from IG dashboard', run autonomous Research Lab loops ('LOOP FOREVER'), and promote configs via a 'Config Write API'. The skill's instructions reference external services and system paths outside the skill folder without declaring how to access them, and include an autonomous infinite-loop pattern for experiments which could run indefinitely if invoked.
Install Mechanism
No install spec and no code files are included (instruction-only). This reduces risk from arbitrary downloads or installs. However, the absence of an install step amplifies the other inconsistencies (the skill claims capabilities that would normally require installed clients or credentials).
Credentials
The skill declares no required environment variables or credentials, yet the content repeatedly references services that would need secrets (trading accounts/APIs for IG/scalper control, OpenAI/xAI API access for models and image generation, possible Config Write API credentials). This mismatch between capability claims and zero declared credential requirements is the key inconsistency: it may hide assumptions about platform-provided privileges, or it may simply be an engineering oversight.
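For contrast, a well-declared skill would name its required secrets and fail fast when they are absent instead of silently assuming platform privileges. A minimal sketch of such a startup check (the variable names are assumptions based on the services the docs mention; this skill declares none of them):

```python
import os

# Hypothetical credential names inferred from the services the docs reference.
REQUIRED_ENV = ["IG_API_KEY", "OPENAI_API_KEY", "XAI_API_KEY"]

def check_required_env(env=os.environ):
    """Return the names of any required credentials that are missing or empty."""
    return [name for name in REQUIRED_ENV if not env.get(name)]

# Example: only the OpenAI key is present, so two names come back missing.
missing = check_required_env({"OPENAI_API_KEY": "sk-test"})
```

A check like this makes the skill's external dependencies auditable before anything runs.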
Persistence & Privilege
The skill is not flagged always:true and is user-invocable (defaults). That is appropriate. However, internal docs describe 'always running' Core agents and an autonomous Research Lab 'LOOP FOREVER' experiment loop — if activated on a platform that permits long-running/autonomous execution, that could lead to indefinite or repeated actions (including config writes and trading experiments). The skill metadata does not request elevated persistence, but the behavioral docs imply it.
What to consider before installing
This skill is mostly documentation for composing teams, which is fine, but there are important mismatches you should clarify before installing or using it:

- Ask the author how the skill expects to access external services: where do trading credentials, OpenAI/xAI keys, or a 'Config Write API' token come from? The skill declares none.
- Confirm whether the referenced reference/agency-agents-main files and any required workspaces (e.g., .openclaw/workspace/) are actually bundled or available on your platform. The SKILL references many file paths that are not present in the package manifest.
- Be cautious about activating Research Lab workflows: the docs describe an autonomous, indefinite loop that can modify configs and run experiments. If the platform grants the skill live execution or the ability to write configs, this could cause continuous actions against trading systems or other services. Refuse or sandbox such behavior unless you trust the source and have safe limits (rate limits, timeouts, least privilege).
- Do not grant trading or account credentials to this skill unless you fully trust the publisher and have audited how those credentials will be used. Prefer least-privilege API keys and test in a sandbox account.
- If you want to proceed, require the author to add explicit required env vars and an install/usage README explaining how credentials are provided, what external APIs are invoked, and what safety bounds (timeouts, budget limits, experiment caps) protect autonomous loops.

Given the above mismatches (capability claims vs. zero declared credentials and an implicit ability to change live configs/trades), treat installation as suspicious until the author explains and fixes these gaps.
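The recommended safety bounds can be made concrete. Below is a minimal sketch of what a bounded experiment loop looks like, in contrast to the docs' 'LOOP FOREVER' pattern: an iteration cap, a wall-clock deadline, and a budget limit, any of which stops the loop. The function names and the (metric, cost) experiment interface are hypothetical; the skill ships no code.

```python
import time

def run_bounded_experiments(run_experiment, max_iters=20, max_seconds=300.0, max_budget=10.0):
    """Run experiment iterations until any safety bound trips.

    run_experiment() is a hypothetical callable returning (metric, cost).
    Returns the best metric seen and the total cost spent.
    """
    deadline = time.monotonic() + max_seconds
    spent = 0.0
    best = None
    for _ in range(max_iters):            # iteration cap
        if time.monotonic() >= deadline:  # wall-clock timeout
            break
        metric, cost = run_experiment()
        spent += cost
        if best is None or metric > best:
            best = metric
        if spent >= max_budget:           # budget limit
            break
    return best, spent

# Example with a stub experiment: fixed metric 0.9, fixed cost 3.0 per run.
# The budget limit trips on the fourth iteration (spent reaches 12.0).
best, spent = run_bounded_experiments(lambda: (0.9, 3.0), max_iters=5, max_budget=10.0)
```

Any autonomous loop you allow this skill to drive should be wrapped in bounds like these before it can touch configs or trading systems.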

Like a lobster shell, security has layers — review code before you run it.

Current version: v1.0.0
latest: vk97bx33gyb6rh7s5kewzrwtfa5831444

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

SKILL.md

Team Builder

Compose the right team for any job by drawing from three rosters of specialists. The Planner analyzes incoming work and proposes an optimal team; the Reviewer validates deliverables before sign-off.

Quick Start

  1. Receive a task — any job, project, or request
  2. Read PLANNER.md in this skill folder — follow the workflow to classify the domain and propose a team
  3. Activate specialists — load the relevant agent definition from the reference files listed below
  4. Execute — hand off work through the team using the handoff templates
  5. Review — read REVIEWER.md and run the QA workflow before final delivery

The Three Rosters

1. Core Team (TEAM-CORE.md)

The permanent OpenClaw agents. Always available, always running.

| Agent | Role | Workspace |
|-------|------|-----------|
| CEO | Leader, orchestrator, final authority | .openclaw/workspace/ |
| IG | Trading specialist, market operations | .openclaw/workspace-ig/ |
| Artist | Image generation, visual analysis | .openclaw/workspace-artist/ |

2. Agency Division (TEAM-AGENCY.md)

55+ specialist agents across 9 divisions. Activated on demand from reference/agency-agents-main/.

| Division | Agents | Key Specialists |
|----------|--------|-----------------|
| Engineering | 7 | Frontend Developer, Backend Architect, AI Engineer, DevOps |
| Design | 7 | UI Designer, UX Architect, Image Prompt Engineer |
| Marketing | 8 | Growth Hacker, Content Creator, Social Media |
| Product | 3 | Sprint Prioritizer, Trend Researcher, Feedback Synthesizer |
| Project Management | 5 | Senior PM, Studio Producer, Experiment Tracker |
| Testing | 7 | Evidence Collector, Reality Checker, API Tester |
| Support | 6 | Analytics Reporter, Finance Tracker, Legal Compliance |
| Spatial Computing | 6 | XR Architect, visionOS Engineer |
| Specialized | 7 | Agents Orchestrator, Data Analytics, LSP Engineer |

3. Research Lab (TEAM-RESEARCH.md)

Autonomous experiment methodology adapted from Karpathy's autoresearch. Run iterative experiment loops on any measurable problem.

Applicable to: trading strategy optimization, image analysis pipelines, model tuning, data analysis, any domain with a measurable metric.

Cross-Team Workflows

The real power is mixing specialists across rosters. Here are proven combinations:

Trading Strategy Optimization

IG (market context) + Research Lab (experiment loop) + AI Engineer (model tuning)
→ IG provides live market data and strategy parameters
→ Research Lab runs 5-min experiment iterations on backtests
→ AI Engineer tunes neural network parameters
→ Reviewer validates with evidence before deploying to live

Visual Content Pipeline

Artist (image generation) + Image Prompt Engineer (prompt crafting) + Visual Storyteller (narrative)
→ Image Prompt Engineer crafts detailed, structured prompts
→ Artist generates via xAI grok-imagine-image-pro
→ Visual Storyteller evaluates narrative quality
→ Iterate until quality threshold met

Astronomy / Image Analysis

Artist (image capture/generation) + Research Lab (analysis loop) + AI Engineer (classification)
→ Artist handles image acquisition and enhancement
→ Research Lab runs iterative analysis (feature detection, classification)
→ AI Engineer builds/tunes detection models
→ Results feed back for next iteration

Dashboard / UI Feature Build

Senior PM (scope) + Frontend Developer (build) + Evidence Collector (QA) + Reality Checker (sign-off)
→ PM breaks spec into tasks with acceptance criteria
→ Frontend Developer implements mobile-first
→ Evidence Collector screenshots and validates each task
→ Reality Checker gives final production-readiness verdict

Full Product Launch

CEO (orchestrate) + Engineering (build) + Design (UX) + Marketing (launch) + Testing (validate)
→ CEO activates Planner to scope the project
→ Engineering + Design work in parallel (Dev↔QA loops)
→ Marketing prepares launch materials
→ Reviewer signs off before go-live

Activating a Specialist

To activate any Agency specialist, load their definition file:

Read the agent definition at:
reference/agency-agents-main/[division]/[agent-file].md

Then adopt that agent's:
- Identity and personality
- Core mission and rules
- Workflow process
- Success metrics

File paths follow the pattern:

  • reference/agency-agents-main/engineering/engineering-frontend-developer.md
  • reference/agency-agents-main/design/design-image-prompt-engineer.md
  • reference/agency-agents-main/testing/testing-evidence-collector.md

See TEAM-AGENCY.md for the complete index with all file paths.
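Because the security review notes that many of these reference paths are not present in the package manifest, a loader should verify that a definition file actually exists and resolves inside the skill folder before adopting it. A minimal sketch, assuming the directory layout above (the function name is hypothetical; the skill ships no code):

```python
from pathlib import Path

def load_agent_definition(skill_root: str, division: str, agent_file: str) -> str:
    """Read an Agency agent definition, refusing paths that escape the skill folder."""
    root = Path(skill_root).resolve()
    target = (root / "reference" / "agency-agents-main" / division / agent_file).resolve()
    if root not in target.parents:  # block ../ traversal out of the skill folder
        raise ValueError(f"path escapes skill folder: {target}")
    if not target.is_file():        # referenced paths may not actually be bundled
        raise FileNotFoundError(f"agent definition not bundled: {target}")
    return target.read_text(encoding="utf-8")
```

Failing loudly on a missing file is preferable to an agent silently improvising an identity that was never defined.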

Handoff Protocol

When passing work between specialists, use this template:

## Handoff
| Field | Value |
|-------|-------|
| From | [Agent Name] |
| To | [Agent Name] |
| Task | [What needs to be done] |
| Priority | [Critical / High / Medium / Low] |

## Context
- Current state: [What's been done]
- Relevant files: [File paths]
- Constraints: [Limits, requirements]

## Deliverable
- What is needed: [Specific output]
- Acceptance criteria:
  - [ ] [Criterion 1]
  - [ ] [Criterion 2]

## Quality
- Evidence required: [What proof of completion looks like]
- Reviewer: [Who validates this deliverable]

For complete handoff templates (QA pass/fail, escalation, phase gates), see: reference/agency-agents-main/strategy/coordination/handoff-templates.md
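The handoff template above can also be filled programmatically so every handoff carries the same fields. A minimal sketch covering the Handoff table and Deliverable checklist (the helper is hypothetical, not part of the skill):

```python
def render_handoff(from_agent, to_agent, task, priority, deliverable, criteria):
    """Render the Handoff table and Deliverable checklist as Markdown."""
    lines = [
        "## Handoff",
        "| Field | Value |",
        "|-------|-------|",
        f"| From | {from_agent} |",
        f"| To | {to_agent} |",
        f"| Task | {task} |",
        f"| Priority | {priority} |",
        "",
        "## Deliverable",
        f"- What is needed: {deliverable}",
        "- Acceptance criteria:",
    ]
    lines += [f"  - [ ] {c}" for c in criteria]  # one checkbox per criterion
    return "\n".join(lines)

md = render_handoff("Planner", "Frontend Developer", "Build login page",
                    "High", "Responsive login form", ["Mobile-first", "Passes QA"])
```

Generating the block rather than hand-copying it keeps the field names consistent across specialists.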

NEXUS Pipeline Modes

For larger projects, use the NEXUS pipeline from the Agency framework:

| Mode | Scale | Agents | Timeline |
|------|-------|--------|----------|
| Micro | Single task/fix | 5-10 | 1-5 days |
| Sprint | Feature or MVP | 15-25 | 2-6 weeks |
| Full | Complete product | All | 12-24 weeks |

Pipeline phases: Discover → Strategize → Scaffold → Build → Harden → Launch → Operate

Quality gates between every phase. Evidence required for all assessments.

For full NEXUS strategy: reference/agency-agents-main/strategy/nexus-strategy.md
For activation prompts: reference/agency-agents-main/strategy/coordination/agent-activation-prompts.md
For quickstart: reference/agency-agents-main/strategy/QUICKSTART.md

Reference Files in This Skill

| File | Contents |
|------|----------|
| SKILL.md | This file — overview and quick start |
| TEAM-CORE.md | CEO/IG/Artist trio — roles, routing, interactions |
| TEAM-AGENCY.md | All 55+ Agency specialists indexed by division |
| TEAM-RESEARCH.md | Autonomous experiment methodology |
| PLANNER.md | Job analysis → team proposal workflow |
| REVIEWER.md | QA validation workflow with quality gates |

Files

7 total
