Install

openclaw skills install swarm-orchestrator

AI Agent cluster orchestration platform - manage, schedule, and coordinate multiple AI agents locally, with a FastAPI backend and React dashboard.
Status: 🟢 Local-First AI Agent Orchestration Platform
Type: Self-hosted, no external dependencies required
Privacy: 100% local processing (except optional LLM API calls)
Local data: ./data/swarm.db (local only), ./logs/ (local only)

OpenClaw Swarm Orchestrator is a local-first platform for building and managing multi-agent AI systems. Think of it as a "control tower" for coordinating multiple AI agents working together.
┌─────────────────┐
│ Web Dashboard │ http://localhost:3000
│ (React UI) │
└────────┬────────┘
│
┌────────▼────────┐
│ FastAPI Server │ http://localhost:8000
│ (Backend API) │
└────────┬────────┘
│
┌────────▼────────┐
│ Local Storage │
│ • SQLite DB │ ./data/swarm.db
│ • Redis Cache │ localhost:6379
│ • Log Files │ ./logs/*.log
└─────────────────┘
Before installing, verify you have:
# Python 3.11+
python --version
# Node.js 18+
node --version
# Redis
redis-cli ping # Should return PONG
# (Optional) Docker
docker --version
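If you prefer to script these checks, a minimal sketch using only the Python standard library follows (the function name and the set of checked binaries are illustrative, not part of the project):

```python
import shutil
import sys

def check_prereqs(min_python=(3, 11)):
    """Return a dict of prerequisite-name -> bool, mirroring the manual checks above."""
    return {
        "python": sys.version_info >= min_python,
        # shutil.which returns None when the binary is not on PATH
        "node": shutil.which("node") is not None,
        "redis": shutil.which("redis-cli") is not None,
        "docker": shutil.which("docker") is not None,  # optional
    }

if __name__ == "__main__":
    for name, ok in check_prereqs().items():
        print(f"{name}: {'OK' if ok else 'MISSING'}")
```

Note this only confirms the binaries exist; `redis-cli ping` is still the definitive liveness check for Redis.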
This is the safest method - everything runs in containers.
# 1. Clone repository (inspect code first!)
git clone https://github.com/ZhenRobotics/openclaw-swarm-orchestrator.git
cd openclaw-swarm-orchestrator
# 2. Review docker-compose.yml before starting
cat docker-compose.yml
# 3. Start services
docker-compose up -d
# 4. Verify
curl http://localhost:8000/health
# Should return: {"status": "healthy"}
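The same health check can be done from Python. The `/health` endpoint and its `{"status": "healthy"}` response come from the steps above; the helper names here are illustrative. A sketch using only the standard library:

```python
import json
import urllib.request

def parse_health(body: bytes) -> bool:
    """Interpret a /health response body; healthy iff status == 'healthy'."""
    try:
        return json.loads(body).get("status") == "healthy"
    except (ValueError, AttributeError):
        return False

def check_health(base_url="http://localhost:8000"):
    """Fetch /health from a running backend (requires the services to be up)."""
    with urllib.request.urlopen(f"{base_url}/health", timeout=5) as resp:
        return parse_health(resp.read())
```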
Access: Web Dashboard at http://localhost:3000, API at http://localhost:8000.
# 1. Install via npm (after reviewing package)
npm view openclaw-swarm-orchestrator # Review before installing
npm install -g openclaw-swarm-orchestrator
# 2. Verify installation
swarm-orchestrator --version
# 3. Start services (in separate terminals)
# Terminal 1: Start Redis
redis-server
# Terminal 2: Start backend
cd backend
python -m venv venv
source venv/bin/activate # Windows: venv\Scripts\activate
pip install -r requirements.txt
uvicorn app.main:app --reload
# Terminal 3: Start frontend
cd frontend
npm install
npm run dev
# 1. Clone and inspect
git clone https://github.com/ZhenRobotics/openclaw-swarm-orchestrator.git
cd openclaw-swarm-orchestrator
# 2. Verify commit hash (security check)
git log -1 --format="%H"
# Should be: acae6e5... (or latest release tag)
# 3. Review code before running
cat backend/requirements.txt # Check dependencies
cat package.json # Check npm deps
cat docker-compose.yml # Check container config
# 4. Install dependencies
cd backend && pip install -r requirements.txt
cd ../frontend && npm install
# 5. Run (see Method 2 for detailed steps)
Create .env in project root:
# Minimal config - no external services needed
DATABASE_URL=sqlite+aiosqlite:///./data/swarm.db
REDIS_URL=redis://localhost:6379
SECRET_KEY=your-random-secret-key-here
DEBUG=true
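SECRET_KEY should be a long random value, not a guessable string. One quick way to generate one (standard library only; the helper name is illustrative):

```python
import secrets

def make_secret_key(n_bytes: int = 32) -> str:
    """Generate a hex-encoded random secret suitable for the SECRET_KEY setting."""
    return secrets.token_hex(n_bytes)

if __name__ == "__main__":
    # Paste the printed line into your .env
    print(f"SECRET_KEY={make_secret_key()}")
```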
If you want to use LLM agents (OpenAI, Anthropic), add:
# Optional - only if using LLM agents
OPENAI_API_KEY=sk-your-key-here
ANTHROPIC_API_KEY=sk-ant-your-key-here
⚠️ Security Note: store your .env file securely and never commit it to git.

# Using Docker
docker-compose up -d
# Or manually
redis-server &
cd backend && uvicorn app.main:app --reload &
cd frontend && npm run dev &
Via Web UI: open the dashboard and create an agent, supplying a config such as {"model": "gpt-4"} if using an LLM agent.

Via API:
curl -X POST http://localhost:8000/api/agents \
-H "Content-Type: application/json" \
-d '{
"name": "My Assistant",
"type": "llm",
"config": {"model": "gpt-4"}
}'
curl -X POST http://localhost:8000/api/tasks \
-H "Content-Type: application/json" \
-d '{
"title": "Analyze data",
"description": "Process the sales report",
"priority": "high"
}'
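The two curl calls above can be replicated from Python. The endpoint paths and payload shapes come from the examples; the helper functions themselves are a sketch, not part of the project's API:

```python
import json
import urllib.request

API = "http://localhost:8000"

def agent_payload(name: str, agent_type: str, config: dict) -> dict:
    """Build the JSON body for POST /api/agents (shape taken from the curl example)."""
    return {"name": name, "type": agent_type, "config": config}

def task_payload(title: str, description: str, priority: str = "medium") -> dict:
    """Build the JSON body for POST /api/tasks."""
    return {"title": title, "description": description, "priority": priority}

def post(path: str, payload: dict):
    """POST a payload to a running backend (requires the services to be up)."""
    req = urllib.request.Request(
        f"{API}{path}",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

# Example (with services running):
#   post("/api/agents", agent_payload("My Assistant", "llm", {"model": "gpt-4"}))
#   post("/api/tasks", task_payload("Analyze data", "Process the sales report", "high"))
```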
Web Dashboard: http://localhost:3000
API:
# System status
curl http://localhost:8000/api/orchestrator/status
# List agents
curl http://localhost:8000/api/agents
# List tasks
curl http://localhost:8000/api/tasks
Uses external LLM APIs (OpenAI, Anthropic).
{
"name": "GPT-4 Assistant",
"type": "llm",
"config": {
"model": "gpt-4",
"temperature": 0.7
}
}
Required: OPENAI_API_KEY or ANTHROPIC_API_KEY
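Before registering an LLM agent, you can confirm the relevant key is actually set. The provider-to-variable mapping below reflects the two keys named above; treating it as a lookup table is this sketch's assumption:

```python
import os

# Assumed mapping from provider name to the env var the backend reads
REQUIRED_KEYS = {"openai": "OPENAI_API_KEY", "anthropic": "ANTHROPIC_API_KEY"}

def has_llm_key(provider: str, env=os.environ) -> bool:
    """True if the API key for the given provider is set and non-empty."""
    var = REQUIRED_KEYS.get(provider.lower())
    return bool(var and env.get(var))
```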
Executes local functions/scripts.
{
"name": "Data Processor",
"type": "tool",
"config": {
"script_path": "./tools/process_data.py"
}
}
No external services needed.
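To illustrate what running a `script_path` might involve: the sketch below invokes a local Python script via subprocess. The exact contract between the orchestrator and tool scripts is an assumption here; this only shows the general pattern:

```python
import subprocess
import sys

def run_tool_script(script_path: str, *args: str, timeout: int = 60) -> str:
    """Run a local Python script (e.g. ./tools/process_data.py) and return its stdout.

    check=True raises CalledProcessError on a nonzero exit code, so failures
    surface instead of being silently swallowed.
    """
    result = subprocess.run(
        [sys.executable, script_path, *args],
        capture_output=True, text=True, timeout=timeout, check=True,
    )
    return result.stdout
```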
Human-in-the-loop workflows.
{
"name": "Manager Approval",
"type": "human",
"config": {
"notification": "email"
}
}
No external services needed.
You define the behavior.
from swarm_orchestrator.base import BaseAgent

class MyCustomAgent(BaseAgent):
    async def execute(self, task):
        # Your custom logic here; return whatever result the task produces
        result = do_something_with(task)
        return result
No external services needed.
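A self-contained variant of the pattern above, with a stand-in base class so it runs without the package installed (in real use you would subclass `swarm_orchestrator.base.BaseAgent` instead):

```python
import asyncio

class BaseAgent:
    """Stand-in for swarm_orchestrator.base.BaseAgent, for this sketch only."""

class EchoAgent(BaseAgent):
    async def execute(self, task):
        # Trivial custom behavior: tag the task and hand it back
        return {"handled_by": "EchoAgent", "task": task}

async def main():
    agent = EchoAgent()
    return await agent.execute({"title": "Analyze data"})

if __name__ == "__main__":
    print(asyncio.run(main()))
```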
Access the dashboard at http://localhost:3000.
All logs stored locally:
# Application logs
tail -f logs/swarm.log
# Docker logs (if using Docker)
docker-compose logs -f
Security checklist:
- Keep secrets in the .env file (not in code)
- chmod 600 .env
- Add .env to .gitignore
- Data stays under ./data/
- chmod 600 data/swarm.db
- Back up the database: cp data/swarm.db backups/

# Check Python version
python --version # Must be 3.11+
# Check Redis
redis-cli ping # Must return PONG
# Check logs
tail -f logs/swarm.log
# Check Node version
node --version # Must be 18+
# Clear cache
cd frontend
rm -rf node_modules
npm install
# Check ports
lsof -i :8000 # Backend
lsof -i :3000 # Frontend
lsof -i :6379 # Redis
# Kill processes if needed
kill -9 <PID>
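Where `lsof` is unavailable, the same port checks can be approximated portably (the function name and port list are illustrative):

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """True if something is accepting TCP connections on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        # connect_ex returns 0 on success instead of raising
        return s.connect_ex((host, port)) == 0

if __name__ == "__main__":
    for name, port in [("backend", 8000), ("frontend", 3000), ("redis", 6379)]:
        print(f"{name} (:{port}): {'in use' if port_in_use(port) else 'free'}")
```

Unlike `lsof`, this tells you only that the port is busy, not which PID holds it.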
Before using this skill:
- Verify the commit hash: acae6e5
- Review requirements.txt and package.json

Version: 0.1.0
Status: Alpha - Active Development
Local-First: ✅ All core features work offline
Privacy: ✅ No data leaves your machine (except optional LLM calls)
Built with privacy and transparency in mind. Inspect the code before you trust it.