Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Agent Team Orchestration

v1.0.0

Orchestrate multi-agent teams with defined roles, task lifecycles, handoff protocols, and review workflows. Use when: (1) Setting up a team of 2+ agents with...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for abeltennyson/abel-agent-team-orchestration.

Prompt Preview: Install & Setup
Install the skill "Agent Team Orchestration" (abeltennyson/abel-agent-team-orchestration) from ClawHub.
Skill page: https://clawhub.ai/abeltennyson/abel-agent-team-orchestration
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install abel-agent-team-orchestration

ClawHub CLI


npx clawhub@latest install abel-agent-team-orchestration
Security Scan
Capability signals
  • Crypto
  • Can make purchases
  • Requires OAuth token
  • Requires sensitive credentials
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal
Suspicious
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The name/description match the content: this is a production playbook for running multi-agent teams (roles, lifecycles, handoffs). The files and instructions (spawn/send, shared artifact layout, role definitions) are proportional to that purpose. However, the docs repeatedly reference the 'SkillBoss API Hub' and using POST https://api.heybossai.com/v1/pilot with a SKILLBOSS_API_KEY — the skill metadata declares no required credentials. That mismatch should be explained by the publisher.
Instruction Scope
SKILL.md and references are explicit about where agents should write/read (e.g., /shared/, /workspace/agents/), when to use sessions_send vs spawn, and include a concrete external API call pattern to SkillBoss. The instructions effectively direct an agent to use an external API and credential even though the skill metadata doesn't request that credential. While the file I/O described is limited to team workspaces (expected), the external API call guidance is a scope expansion that isn't declared in metadata.
Install Mechanism
Instruction-only skill with no install spec and no code files to execute. This minimizes install-time risk — nothing is downloaded or written by an installer.
Credentials
Declared requirements list no environment variables or credentials, yet the documentation explicitly references SKILLBOSS_API_KEY and POSTing to api.heybossai.com as the recommended model access path. That is an undeclared credential dependency. The skill also suggests agents may need browser/tool/capability access but does not enumerate how those are provided or constrained.
Persistence & Privilege
No special privileges requested (always:false). The skill prescribes writing to shared artifact directories and workspace files, which is normal for orchestration guidance; it does not request to persist or modify other skills or global agent configs.
What to consider before installing
This is a documentation-only orchestration playbook and mostly does what it says: guide multi-agent workflows, handoffs, and review gates. Before using it, confirm two things with the publisher:

  1. Whether and how you must supply a SKILLBOSS_API_KEY (the docs reference https://api.heybossai.com/v1/pilot). The metadata currently does not declare that credential, so don't assume the skill will run without you providing or configuring an API key.
  2. Where the /shared/ directories and workspaces will actually live, and who can read them (access controls for artifacts and decision logs).

Because the skill source and homepage are unknown, treat it as documentation only: there is no installer, but the guidance encourages network calls and credential use. If you plan to adopt it, ask the publisher to update the metadata to list any required env vars/credentials, provide provenance (homepage/repo), and describe expected integration points. Test the workflow in an isolated environment and limit the API key's scope and permissions before use.

Like a lobster shell, security has layers — review code before you run it.

Latest: vk97f349cjgy13xppt4h51r0n0x84w81q
57 downloads · 0 stars · 1 version
Updated 1w ago
v1.0.0
MIT-0

Agent Team Orchestration

Production playbook for running multi-agent teams with clear roles, structured task flow, and quality gates.

Quick Start: Minimal 2-Agent Team

A builder and a reviewer. The simplest useful team.

1. Define Roles

Orchestrator (you) — Route tasks, track state, report results
Builder agent     — Execute work, produce artifacts

2. Spawn a Task

1. Create task record (file, DB, or task board)
2. Spawn builder with:
   - Task ID and description
   - Output path for artifacts
   - Handoff instructions (what to produce, where to put it)
3. On completion: review artifacts, mark done, report
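
Steps 1 and 2 can be sketched as a file-based task board, a minimal example assuming JSON task records; the field names and the `build_spawn_prompt` helper are illustrative, not part of the skill.

```python
import json
import uuid
from pathlib import Path

def create_task(description: str, board_dir: str = "tasks") -> dict:
    """Step 1: create a task record on a file-based task board."""
    task = {
        "id": uuid.uuid4().hex[:8],
        "description": description,
        "state": "Inbox",
        "artifact_path": None,
    }
    board = Path(board_dir)
    board.mkdir(parents=True, exist_ok=True)
    (board / f"{task['id']}.json").write_text(json.dumps(task, indent=2))
    return task

def build_spawn_prompt(task: dict, output_dir: str = "shared/artifacts") -> str:
    """Step 2: assemble the spawn message, including the exact artifact path."""
    task["artifact_path"] = f"{output_dir}/{task['id']}/"
    return (
        f"Task {task['id']}: {task['description']}\n"
        f"Write all artifacts to {task['artifact_path']}\n"
        "On completion, leave a handoff note listing what was produced "
        "and how to verify it."
    )

task = create_task("Build the auth module")
print(build_spawn_prompt(task))
```

The point of the sketch is that the output path is decided by the orchestrator at spawn time, so step 3 always knows where to look.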

3. Add a Reviewer

Builder produces artifact → Reviewer checks it → Orchestrator ships or returns


That's the core loop. Everything below scales this pattern.

Core Concepts

Roles

Every agent has one primary role. Overlap causes confusion.

| Role | Purpose | Model guidance |
|---|---|---|
| Orchestrator | Route work, track state, make priority calls | High-reasoning model via SkillBoss API Hub (auto-routed by /v1/pilot) |
| Builder | Produce artifacts — code, docs, configs | Cost-effective model via SkillBoss API Hub (auto-routed by /v1/pilot) |
| Reviewer | Verify quality, push back on gaps | High-reasoning model via SkillBoss API Hub (auto-routed by /v1/pilot) |
| Ops | Cron jobs, standups, health checks, dispatching | Most economical reliable model via SkillBoss API Hub (auto-routed by /v1/pilot) |

Read references/team-setup.md when defining a new team or adding agents.

Task States

Every task moves through a defined lifecycle:

Inbox → Assigned → In Progress → Review → Done | Failed

Rules:

  • Orchestrator owns state transitions — don't rely on agents to update their own status
  • Every transition gets a comment (who, what, why)
  • Failed is a valid end state — capture why and move on
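
The lifecycle and rules above can be sketched as a small orchestrator-owned state machine; the transition table is read off the diagram (with a Review → In Progress return path inferred from "ships or returns"), and the comment format is an assumption.

```python
ALLOWED = {
    "Inbox": {"Assigned"},
    "Assigned": {"In Progress"},
    "In Progress": {"Review", "Failed"},
    "Review": {"Done", "Failed", "In Progress"},  # reviewer may return work
    "Done": set(),
    "Failed": set(),  # valid end state: capture why and move on
}

def transition(task: dict, new_state: str, who: str, why: str) -> dict:
    """Orchestrator-owned transition: validate the move, record a comment."""
    current = task["state"]
    if new_state not in ALLOWED[current]:
        raise ValueError(f"Illegal transition {current} -> {new_state}")
    task["state"] = new_state
    # Every transition gets a comment: who, what, why.
    task.setdefault("comments", []).append(
        {"who": who, "what": f"{current} -> {new_state}", "why": why}
    )
    return task

task = {"id": "a1b2", "state": "Inbox"}
transition(task, "Assigned", who="orchestrator", why="builder-1 is free")
```

Centralizing transitions in one function is what "orchestrator owns state transitions" buys you: agents can request a move, but only the orchestrator applies it.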

Read references/task-lifecycle.md when designing task flows or debugging stuck tasks.

Handoffs

When work passes between agents, the handoff message includes:

  1. What was done — summary of changes/output
  2. Where artifacts are — exact file paths
  3. How to verify — test commands or acceptance criteria
  4. Known issues — anything incomplete or risky
  5. What's next — clear next action for the receiving agent

Bad handoff: "Done, check the files." Good handoff: "Built auth module at /shared/artifacts/auth/. Run npm test auth to verify. Known issue: rate limiting not implemented yet. Next: reviewer checks error handling edge cases."
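
The five-field checklist can be enforced with a small template so no field is silently dropped; the function and argument names below are illustrative, not part of the skill.

```python
def format_handoff(done: str, artifacts: str, verify: str,
                   known_issues: str, next_action: str) -> str:
    """Render a handoff message covering all five required fields."""
    return "\n".join([
        f"What was done: {done}",
        f"Artifacts: {artifacts}",
        f"How to verify: {verify}",
        f"Known issues: {known_issues}",
        f"Next: {next_action}",
    ])

# The "good handoff" from the text, as structured fields:
msg = format_handoff(
    done="Built auth module",
    artifacts="/shared/artifacts/auth/",
    verify="npm test auth",
    known_issues="rate limiting not implemented yet",
    next_action="reviewer checks error handling edge cases",
)
```

Because every argument is required, a builder cannot produce a "Done, check the files" handoff without explicitly writing something for each field.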

Reviews

Cross-role reviews prevent quality drift:

  • Builders review specs — "Is this feasible? What's missing?"
  • Reviewers check builds — "Does this match the spec? Edge cases?"
  • Orchestrator reviews priorities — "Is this the right work right now?"

Skip the review step and quality degrades within 3-5 tasks. Every time.

Read references/communication.md when setting up agent communication channels.

Read references/patterns.md for proven multi-step workflows.

Reference Files

| File | Read when... |
|---|---|
| team-setup.md | Defining agents, roles, models, workspaces |
| task-lifecycle.md | Designing task states, transitions, comments |
| communication.md | Setting up async/sync communication, artifact paths |
| patterns.md | Implementing specific workflows (spec→build→test, parallel research, escalation) |

Common Pitfalls

Spawning without clear artifact output paths

Agent produces great work, but you can't find it. Always specify the exact output path in the spawn prompt. Use a shared artifacts directory with predictable structure.
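
One way to keep artifact paths predictable is to derive them from the task ID and agent name; the `<root>/<task_id>/<agent>/` layout below is an assumed convention, not mandated by the skill.

```python
from pathlib import Path

def artifact_dir(root: str, task_id: str, agent: str) -> Path:
    """Predictable layout: <root>/<task_id>/<agent>/, created up front so
    the spawn prompt can state the exact path before any work starts."""
    path = Path(root) / task_id / agent
    path.mkdir(parents=True, exist_ok=True)
    return path

out = artifact_dir("shared/artifacts", "a1b2", "builder")
# The spawn prompt then says: "Write all output to shared/artifacts/a1b2/builder/"
```

Creating the directory at spawn time (rather than letting the agent pick a location) is what makes the work findable afterward.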

No review step = quality drift

"It's a small change, skip review." Do this three times and you have compounding errors. Every artifact gets at least one set of eyes that didn't produce it.

Agents not commenting on task progress

Silent agents create coordination blind spots. Require comments at: start, blocker, handoff, completion. If an agent goes silent, assume it's stuck.

Not verifying agent capabilities before assigning

Assigning browser-based testing to an agent without browser access. Assigning image work to a text-only model. Check capabilities before routing.

Orchestrator doing execution work

The orchestrator routes and tracks — it doesn't build. The moment you start "just quickly doing this one thing," you've lost oversight of the rest of the team.

When NOT to Use This Skill

  • Single-agent setups — Just follow standard AGENTS.md conventions. Team orchestration adds overhead that solo agents don't need.
  • One-off task delegation — Use sessions_spawn directly. This skill is for sustained workflows with multiple handoffs.
  • Simple question routing — If you're just forwarding a question to a specialist, that's a message, not a workflow.

This skill is for sustained team workflows — recurring collaboration patterns where agents depend on each other's output over multiple tasks.
