Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Claude Team

Orchestrate multiple Claude Code workers via iTerm2 using the claude-team MCP server. Spawn workers with git worktrees, assign beads issues, monitor progress, and coordinate parallel development work.

MIT-0 · Free to use, modify, and redistribute. No attribution required.
15 · 4.8k · 25 current installs · 25 all-time installs
Security Scan
VirusTotal
Suspicious
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The stated purpose (orchestrating Claude Code workers via iTerm2) matches many of the instructions (creating worktrees, controlling iTerm2, spawning workers). However, the SKILL.md metadata and text require tools not declared in the registry metadata: they reference 'mcporter', 'uvx', the 'bd' (beads) CLI, the iTerm2 Python API, and a ~/.claude.json config. The registry lists no required binaries or environment variables — that mismatch is unexplained.
Instruction Scope
Instructions tell the agent to create git worktrees, run bd show, mark/close issues, commit changes, and run workers with a '--dangerously-skip-permissions' option. They also expect access to user repositories (project_path), ~/.claude.json, and the iTerm2 Python API (which grants control over the terminal). These actions are powerful and go beyond simple orchestration; the documentation does not declare what credentials or protections are used when workers perform issue state changes or commits.
Install Mechanism
There is no formal install spec in the registry, but a provided assets/setup.sh configures a launchd service and expects a plist template 'com.claude-team.plist.template' that is not included in the bundle. setup.sh checks for 'uvx' and directs users to install it via a curl | sh script from astral.sh — fetching and running remote install scripts is high risk. The combination of a missing template and an external install instruction is an integrity/availability concern.
Credentials
The skill declares no required env vars or primary credential, yet the instructions reference configuration files (~/.claude.json), an implied CLAUDE_TEAM_PROJECT_DIR, and rely on local CLIs (mcporter, bd) which likely need their own credentials/config. In addition, the guidance to use '--dangerously-skip-permissions' suggests bypassing safety controls without justification. Credentials or tokens for issue systems, git remotes, or the MCP server are not described but would be necessary in practice.
Persistence & Privilege
The `always` flag is false (good), but the bundled setup.sh installs a persistent launchd agent (it writes to ~/Library/LaunchAgents and loads the service). Installing that service requires user approval and grants continuous local network capability (a server listening on 127.0.0.1:8766 in the examples). Persisting a service is expected for a local MCP server, but it is a material change to the system and should only be done after inspecting the missing plist template and the server binary invoked by uvx.
What to consider before installing
Things to check before installing or running anything:

  1. The package documentation references mcporter, uvx, bd, the iTerm2 Python API, and ~/.claude.json, but the registry metadata does not declare those dependencies — verify you have and trust those tools.
  2. The assets/setup.sh expects a plist template file (com.claude-team.plist.template) that is not bundled; ask the author for that template and inspect it before running setup.sh.
  3. setup.sh suggests installing 'uvx' via a curl | sh installer — avoid running remote install scripts unless you trust the source and have reviewed the installer.
  4. The skill recommends using '--dangerously-skip-permissions' for workers — do not enable that flag unless you understand and accept the security implications.
  5. Confirm where the MCP server code (the handlers behind mcporter call claude-team.*) lives and inspect it; this skill is instruction-only and relies on external server code that must be audited.
  6. Back up repositories and run these operations in a sandbox (or a disposable VM) first.

Absence of static findings does not mean safe — request the missing files and server implementation, or use alternative, well-audited tooling.
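Part of that checklist can be automated before anything is executed. A minimal audit sketch — SKILL_DIR is a placeholder for wherever the downloaded bundle was unpacked, and the grep pattern is one heuristic, not an exhaustive scan:

```shell
# Minimal pre-install audit sketch (SKILL_DIR is a placeholder; adjust to your path).
SKILL_DIR="${SKILL_DIR:-.}"
SETUP="$SKILL_DIR/assets/setup.sh"
TEMPLATE="$SKILL_DIR/assets/com.claude-team.plist.template"

# Read the setup script in full before ever executing it
if [ -f "$SETUP" ]; then
  cat "$SETUP"
  # Flag curl-pipe-to-shell style remote installers inside it
  grep -nE 'curl[^|]*\|[[:space:]]*(ba|z)?sh' "$SETUP" || echo "no curl|sh lines found"
fi

# Confirm the plist template the script expects is actually bundled
if [ -f "$TEMPLATE" ]; then
  echo "plist template present - inspect it next"
else
  echo "plist template MISSING - ask the author before running setup.sh"
fi
```

Running this against the current bundle should report the template as missing, which is exactly the integrity concern noted above.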

Like a lobster shell, security has layers — review code before you run it.

Current version: v1.5.0
Download zip
latest: vk973cyrqke01gxktxafv3wd2dx7ymj47

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

Runtime requirements

👥 Clawdis
OS: macOS
Bins: mcporter

SKILL.md

Claude Team

Claude-team is an MCP server that lets you spawn and manage teams of Claude Code sessions via iTerm2. Each worker gets its own terminal pane and an optional git worktree, and can be assigned beads issues.

Why Use Claude Team?

  • Parallelism: Fan out work to multiple agents working simultaneously
  • Context isolation: Each worker has fresh context, keeps coordinator context clean
  • Visibility: Real Claude Code sessions you can watch, interrupt, or take over
  • Git worktrees: Each worker can have an isolated branch for their work

⚠️ Important Rule

NEVER make code changes directly. Always spawn workers for code changes. This keeps your context clean and provides proper git workflow with worktrees.

Prerequisites

  • macOS with iTerm2 (Python API enabled: Preferences → General → Magic → Enable Python API)
  • claude-team MCP server configured in ~/.claude.json

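The skill assumes a claude-team entry in ~/.claude.json but does not show one. A plausible stdio-style entry, assuming the server is launched from a local checkout via uv (the path is a placeholder, and this shape is an inference from the HTTP-mode command shown later in this document — the stdio variant presumably just omits --http):

```json
{
  "mcpServers": {
    "claude-team": {
      "command": "uv",
      "args": [
        "run", "--directory", "/path/to/claude-team",
        "python", "-m", "claude_team_mcp"
      ]
    }
  }
}
```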
Using via mcporter

All tools are called through mcporter call claude-team.<tool>:

mcporter call claude-team.list_workers
mcporter call claude-team.spawn_workers workers='[{"project_path":"/path/to/repo","bead":"cp-123"}]'

Core Tools

spawn_workers

Create new Claude Code worker sessions.

mcporter call claude-team.spawn_workers \
  workers='[{
    "project_path": "/path/to/repo",
    "bead": "cp-123",
    "annotation": "Fix auth bug",
    "use_worktree": true,
    "skip_permissions": true
  }]' \
  layout="auto"

Worker config fields:

  • project_path: Required. Path to repo or "auto" (uses CLAUDE_TEAM_PROJECT_DIR)
  • bead: Optional beads issue ID — worker will follow beads workflow
  • annotation: Task description (shown on badge, used in branch name)
  • prompt: Additional instructions (if no bead, this is their assignment)
  • use_worktree: Create isolated git worktree (default: true)
  • skip_permissions: Start with --dangerously-skip-permissions (default: false)
  • name: Optional worker name override (auto-picks from themed sets otherwise)

Layout options:

  • "auto": Reuse existing claude-team windows, split into available space
  • "new": Always create fresh window (1-4 workers in grid layout)

list_workers

See all managed workers:

mcporter call claude-team.list_workers
mcporter call claude-team.list_workers status_filter="ready"

Status values: spawning, ready, busy, closed

message_workers

Send messages to one or more workers:

mcporter call claude-team.message_workers \
  session_ids='["Groucho"]' \
  message="Please also add unit tests" \
  wait_mode="none"

wait_mode options:

  • "none": Fire and forget (default)
  • "any": Return when any worker is idle
  • "all": Return when all workers are idle

check_idle_workers / wait_idle_workers

Check or wait for workers to finish:

# Quick poll
mcporter call claude-team.check_idle_workers session_ids='["Groucho","Harpo"]'

# Blocking wait
mcporter call claude-team.wait_idle_workers \
  session_ids='["Groucho","Harpo"]' \
  mode="all" \
  timeout=600

read_worker_logs

Get conversation history:

mcporter call claude-team.read_worker_logs \
  session_id="Groucho" \
  pages=2

examine_worker

Get detailed status including conversation stats:

mcporter call claude-team.examine_worker session_id="Groucho"

close_workers

Terminate workers when done:

mcporter call claude-team.close_workers session_ids='["Groucho","Harpo"]'

⚠️ Worktree cleanup: Workers with worktrees commit to ephemeral branches. After closing:

  1. Review commits on the worker's branch
  2. Merge or cherry-pick to a persistent branch
  3. Delete the branch: git branch -D <branch-name>
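Those three cleanup steps can be exercised end-to-end in a throwaway repository. The branch and commit names below are invented for the demo; the `-c user.*` flags just give the demo repo an identity:

```shell
# Demo of the post-close cleanup flow in a disposable repo.
DEMO="$(mktemp -d)"
cd "$DEMO"
git init -q -b main
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "initial"

# Simulate a worker's ephemeral branch with one commit
git checkout -q -b worker/groucho-fix-auth
echo "fix" > auth.txt
git add auth.txt
git -c user.email=demo@example.com -c user.name=demo commit -q -m "Fix auth bug (cp-123)"
git checkout -q main

# 1. Review commits on the worker's branch
git log --oneline main..worker/groucho-fix-auth

# 2. Cherry-pick (or merge) onto the persistent branch
git -c user.email=demo@example.com -c user.name=demo cherry-pick worker/groucho-fix-auth

# 3. Delete the ephemeral branch
git branch -D worker/groucho-fix-auth
```

After step 2 the fix lives on main, so deleting the ephemeral branch loses nothing.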

bd_help

Quick reference for beads commands:

mcporter call claude-team.bd_help

Worker Identification

Workers can be referenced by any of:

  • Internal ID: Short hex string (e.g., 3962c5c4)
  • Terminal ID: iterm:UUID format
  • Worker name: Human-friendly name (e.g., Groucho, Aragorn)

Workflow: Assigning a Beads Issue

# 1. Spawn worker with a bead assignment
mcporter call claude-team.spawn_workers \
  workers='[{
    "project_path": "/Users/phaedrus/Projects/myrepo",
    "bead": "proj-abc",
    "annotation": "Implement config schemas",
    "use_worktree": true,
    "skip_permissions": true
  }]'

# 2. Worker automatically:
#    - Creates worktree with branch named after bead
#    - Runs `bd show proj-abc` to understand the task
#    - Marks issue in_progress
#    - Implements the work
#    - Closes the issue
#    - Commits with issue reference

# 3. Monitor progress
mcporter call claude-team.check_idle_workers session_ids='["Groucho"]'
mcporter call claude-team.read_worker_logs session_id="Groucho"

# 4. When done, close and merge
mcporter call claude-team.close_workers session_ids='["Groucho"]'
# Then: git merge or cherry-pick from worker's branch

Workflow: Parallel Fan-Out

# Spawn multiple workers for parallel tasks
mcporter call claude-team.spawn_workers \
  workers='[
    {"project_path": "auto", "bead": "cp-123", "annotation": "Auth module"},
    {"project_path": "auto", "bead": "cp-124", "annotation": "API routes"},
    {"project_path": "auto", "bead": "cp-125", "annotation": "Unit tests"}
  ]' \
  layout="new"

# Wait for all to complete
mcporter call claude-team.wait_idle_workers \
  session_ids='["Groucho","Harpo","Chico"]' \
  mode="all"

# Review and close
mcporter call claude-team.close_workers \
  session_ids='["Groucho","Harpo","Chico"]'

Best Practices

  1. Use beads: Assign bead IDs so workers follow proper issue workflow
  2. Use worktrees: Keeps work isolated, enables parallel commits
  3. Skip permissions: Workers need skip_permissions: true to write files
  4. Monitor, don't micromanage: Let workers complete, then review
  5. Merge carefully: Review worker branches before merging to main
  6. Close workers: Always close when done to clean up worktrees

HTTP Mode (Streamable HTTP Transport)

For persistent server operation, claude-team can run as an HTTP server. This keeps the MCP server running continuously with persistent state, avoiding cold starts.

Starting the HTTP Server

Run the claude-team HTTP server directly:

# From the claude-team directory
uv run python -m claude_team_mcp --http --port 8766

# Or specify the directory explicitly
uv run --directory /path/to/claude-team python -m claude_team_mcp --http --port 8766

For automatic startup on login, use launchd (see the "launchd Auto-Start" section below).

mcporter.json Configuration

Once the HTTP server is running, configure mcporter to connect to it. Create ~/.mcporter/mcporter.json:

{
  "mcpServers": {
    "claude-team": {
      "transport": "streamable-http",
      "url": "http://127.0.0.1:8766/mcp",
      "lifecycle": "keep-alive"
    }
  }
}

Benefits of HTTP Mode

  • Persistent state: Worker registry survives across CLI invocations
  • Faster responses: No Python environment startup on each call
  • External access: Can be accessed by cron jobs, scripts, or other tools
  • Session recovery: Server tracks sessions even if coordinator disconnects
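Before pointing clients at the server, you can check that something is listening on the configured port. Note the exact HTTP status a bare GET to /mcp returns is transport-dependent (streamable HTTP expects JSON-RPC POSTs), so any response code other than 000 simply means a server is up:

```shell
# Liveness probe for the local claude-team HTTP server.
# curl prints 000 when nothing answers; any other code means a listener exists.
CODE="$(curl -s -o /dev/null -w '%{http_code}' --max-time 2 http://127.0.0.1:8766/mcp || true)"
if [ "$CODE" = "000" ]; then
  echo "no server listening on 127.0.0.1:8766"
else
  echo "server responded with HTTP $CODE"
fi
```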

Connecting from Claude Code

Update your .mcp.json to use HTTP transport:

{
  "mcpServers": {
    "claude-team": {
      "transport": "streamable-http",
      "url": "http://127.0.0.1:8766/mcp"
    }
  }
}

launchd Auto-Start

To automatically start the claude-team server on login, use the bundled setup script.

Quick Setup

Run the setup script from the skill's assets directory:

# From the skill directory
./assets/setup.sh

# Or specify a custom claude-team location
CLAUDE_TEAM_DIR=/path/to/claude-team ./assets/setup.sh

What the Setup Does

The setup script:

  1. Detects your uv installation path
  2. Creates the log directory at ~/.claude-team/logs/
  3. Generates a launchd plist from assets/com.claude-team.plist.template
  4. Installs it to ~/Library/LaunchAgents/com.claude-team.plist
  5. Loads the service to start immediately

The plist template uses uv run to start the HTTP server on port 8766, configured for iTerm2 Python API access (Aqua session type).
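The template itself is not included in this bundle, so the generated plist cannot be shown verbatim. Based only on the description above (uv run, port 8766, Aqua session type, logs under ~/.claude-team/logs/), it would plausibly look like the following sketch — all paths are placeholders, and you should inspect the real template from the author before loading anything:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key><string>com.claude-team</string>
  <key>ProgramArguments</key>
  <array>
    <string>/Users/you/.local/bin/uv</string>
    <string>run</string>
    <string>--directory</string>
    <string>/path/to/claude-team</string>
    <string>python</string>
    <string>-m</string>
    <string>claude_team_mcp</string>
    <string>--http</string>
    <string>--port</string>
    <string>8766</string>
  </array>
  <key>RunAtLoad</key><true/>
  <key>LimitLoadToSessionType</key><string>Aqua</string>
  <key>StandardOutPath</key><string>/Users/you/.claude-team/logs/stdout.log</string>
  <key>StandardErrorPath</key><string>/Users/you/.claude-team/logs/stderr.log</string>
</dict>
</plist>
```

LimitLoadToSessionType set to Aqua restricts the agent to GUI login sessions, which matches the stated requirement for iTerm2 Python API access.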

Managing the Service

# Stop the service
launchctl unload ~/Library/LaunchAgents/com.claude-team.plist

# Restart (re-run setup)
./assets/setup.sh

# Check if running
launchctl list | grep claude-team

# View logs
tail -f ~/.claude-team/logs/stdout.log
tail -f ~/.claude-team/logs/stderr.log

Troubleshooting launchd

# Check for load errors
launchctl print gui/$UID/com.claude-team

# Force restart
launchctl kickstart -k gui/$UID/com.claude-team

# Remove and reload (if plist changed)
launchctl bootout gui/$UID/com.claude-team
launchctl bootstrap gui/$UID ~/Library/LaunchAgents/com.claude-team.plist

Cron Integration

For background monitoring and notifications, claude-team supports cron-based worker tracking.

Worker Tracking File

Claude-team writes worker state to ~/.claude-team/memory/worker-tracking.json:

{
  "workers": {
    "Groucho": {
      "session_id": "3962c5c4",
      "bead": "cp-123",
      "annotation": "Fix auth bug",
      "status": "busy",
      "project_path": "/Users/phaedrus/Projects/myrepo",
      "started_at": "2025-01-05T10:30:00Z",
      "last_activity": "2025-01-05T11:45:00Z"
    },
    "Harpo": {
      "session_id": "a1b2c3d4",
      "bead": "cp-124",
      "annotation": "Add API routes",
      "status": "idle",
      "project_path": "/Users/phaedrus/Projects/myrepo",
      "started_at": "2025-01-05T10:30:00Z",
      "last_activity": "2025-01-05T11:50:00Z",
      "completed_at": "2025-01-05T11:50:00Z"
    }
  },
  "last_updated": "2025-01-05T11:50:00Z"
}
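The tracking file is plain JSON, so ad-hoc queries with jq work well. A sketch that lists workers still busy — sample data is inlined here so the snippet runs anywhere; point TRACKING_FILE at ~/.claude-team/memory/worker-tracking.json in real use:

```shell
# Query the tracking file for workers still busy (sample data inlined for the demo).
TRACKING_FILE="${TRACKING_FILE:-$(mktemp)}"
cat > "$TRACKING_FILE" <<'EOF'
{"workers":{"Groucho":{"status":"busy","bead":"cp-123"},
            "Harpo":{"status":"idle","bead":"cp-124"}},
 "last_updated":"2025-01-05T11:50:00Z"}
EOF

# Print each busy worker with its bead ID, tab-separated
jq -r '.workers | to_entries[]
       | select(.value.status == "busy")
       | "\(.key)\t\(.value.bead)"' "$TRACKING_FILE"
```

With the sample data this prints only Groucho, since Harpo is already idle; the monitoring script below applies the same to_entries/select pattern to find completed workers.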

Cron Job for Monitoring Completions

Create a monitoring script at ~/.claude-team/scripts/check-workers.sh:

#!/bin/bash
# Check for completed workers and send notifications

TRACKING_FILE="$HOME/.claude-team/memory/worker-tracking.json"
NOTIFIED_FILE="$HOME/.claude-team/memory/notified-workers.json"
TELEGRAM_BOT_TOKEN="${TELEGRAM_BOT_TOKEN}"
TELEGRAM_CHAT_ID="${TELEGRAM_CHAT_ID}"

# Exit if tracking file doesn't exist
[ -f "$TRACKING_FILE" ] || exit 0

# Initialize notified file if needed
[ -f "$NOTIFIED_FILE" ] || echo '{"notified":[]}' > "$NOTIFIED_FILE"

# Find idle workers that haven't been notified
IDLE_WORKERS=$(jq -r '
  .workers | to_entries[] |
  select(.value.status == "idle") |
  .key
' "$TRACKING_FILE")

for worker in $IDLE_WORKERS; do
  # Check if already notified
  ALREADY_NOTIFIED=$(jq -r --arg w "$worker" '.notified | index($w) != null' "$NOTIFIED_FILE")

  if [ "$ALREADY_NOTIFIED" = "false" ]; then
    # Get worker details
    BEAD=$(jq -r --arg w "$worker" '.workers[$w].bead // "no-bead"' "$TRACKING_FILE")
    ANNOTATION=$(jq -r --arg w "$worker" '.workers[$w].annotation // "no annotation"' "$TRACKING_FILE")

    # Send Telegram notification
    MESSAGE="🤖 Worker *${worker}* completed
📋 Bead: \`${BEAD}\`
📝 ${ANNOTATION}"

    curl -s -X POST "https://api.telegram.org/bot${TELEGRAM_BOT_TOKEN}/sendMessage" \
      -d chat_id="$TELEGRAM_CHAT_ID" \
      -d text="$MESSAGE" \
      -d parse_mode="Markdown" > /dev/null

    # Mark as notified
    jq --arg w "$worker" '.notified += [$w]' "$NOTIFIED_FILE" > "${NOTIFIED_FILE}.tmp"
    mv "${NOTIFIED_FILE}.tmp" "$NOTIFIED_FILE"
  fi
done

Make it executable:

chmod +x ~/.claude-team/scripts/check-workers.sh

Crontab Entry

Add to crontab (crontab -e):

# Check claude-team workers every 2 minutes
*/2 * * * * TELEGRAM_BOT_TOKEN="your-bot-token" TELEGRAM_CHAT_ID="your-chat-id" ~/.claude-team/scripts/check-workers.sh

Environment Setup

For interactive runs of the script, set Telegram credentials in your shell profile (~/.zshrc):

export TELEGRAM_BOT_TOKEN="123456789:ABCdefGHIjklMNOpqrsTUVwxyz"
export TELEGRAM_CHAT_ID="-1001234567890"

Note that cron does not source your shell profile, which is why the crontab entry above passes both variables inline.

Alternative: Using clawdbot for Notifications

If you have clawdbot configured, you can send notifications through it instead:

# In check-workers.sh, replace the curl command with:
clawdbot send --to "$TELEGRAM_CHAT_ID" --message "$MESSAGE" --provider telegram

Clearing Notification State

When starting a fresh batch of workers, clear the notified list:

echo '{"notified":[]}' > ~/.claude-team/memory/notified-workers.json

Files

2 total
