Install

openclaw skills install hive-task-router

hive-task-router is an intelligent task-routing system for OpenClaw. It identifies task types (web/code/data/doc/chat) and automatically routes each task to the optimal model with the appropriate execution mode (subagent/main session).
Universal Model Support — Works with any AI model provider (Bailian, OpenAI, Anthropic, etc.) via environment variable configuration.
Use this skill automatically when the user's message contains ANY of these patterns:
| Trigger Pattern | Keywords | Action |
|---|---|---|
| Code tasks | 代码、编程、脚本、函数、nodejs、react、vue、typescript、javascript、html、css、前端、后端、api、接口、调试、bug、优化、重构 | Run router.sh → Execute with code model + subagent |
| Web research | 搜索、查找、调研、研究、github、项目、趋势、报告、分析、对比、评测、最新、2026、新闻、动态 | Run router.sh → Execute with web model + subagent |
| Data tasks | 数据、分析、统计、图表、可视化、excel、csv、json、处理、转换 | Run router.sh → Execute with data model + subagent |
| Documentation | 文档、说明、教程、指南、手册、readme、wiki、注释、文档化 | Run router.sh → Execute with doc model + subagent |
| Batch tasks | 多个、批量、同时、并行、一起 | Run router.sh for each → Execute in parallel |
Do NOT use this skill when:
Users can explicitly trigger this skill by running router.sh "task description" directly.

Routing decision flow:

Receive user message
  ↓
Contains specific trigger keywords? (code/web/data/doc/batch)
  ├─ YES → Run router.sh to analyze
  │         → Get recommended model + execution mode
  │         → Execute with recommended configuration
  │         → Report result to user
  └─ NO → Contains vague task keywords? (任务、帮忙、处理、搞定、完成)
       ├─ YES → Ask a clarifying question (see "Vague Task Handling")
       │         → User clarifies → Re-analyze with the new info
       └─ NO → Handle directly (no routing needed)
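As a rough sketch, the three-way branch above could be expressed in shell like this. The keyword lists are abbreviated subsets of the tables in this document, and classify_message is a hypothetical helper name; the real logic lives in router.sh.

```shell
#!/usr/bin/env bash
# Sketch of the decision flow (hypothetical helper; abbreviated keywords).
classify_message() {
  local msg="$1"
  # 1) Specific trigger keywords -> run the router
  case "$msg" in
    *代码*|*脚本*|*搜索*|*调研*|*数据*|*文档*|*批量*) echo "route"; return ;;
  esac
  # 2) Vague task keywords -> ask a clarifying question first
  case "$msg" in
    *任务*|*帮忙*|*处理*|*搞定*|*完成*) echo "clarify"; return ;;
  esac
  # 3) Anything else -> handle directly, no routing needed
  echo "direct"
}
```

Because the specific-keyword check runs first, a message containing both a specific and a vague keyword (e.g. 处理 plus 脚本) still routes instead of triggering a clarification.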
When the user's message is vague (e.g., "做个任务", "帮忙处理一下", "搞定这件事", i.e. "do a task", "help with this", "get this done"):
Step 1: Acknowledge and ask
好的主人,请问是什么类型的任务?
💻 写代码/脚本?
- 例如:"写个 Python 脚本"、"开发一个 API"
🔍 搜索调研?
- 例如:"搜索最新趋势"、"调研竞品"
📊 数据处理?
- 例如:"分析 Excel 数据"、"转换 JSON 格式"
📄 写文档?
- 例如:"写 API 文档"、"编写教程"
💬 还是只是聊天?
- 例如:"今天有什么安排"、"帮我总结一下"
或者您直接告诉我具体内容,我来判断!
Step 2: User clarifies
User: "写个脚本处理数据"
↓
Now contains: "脚本" (code) + "数据" (data)
↓
Priority: code > data
↓
Execute with: qwen3-coder-plus + subagent
Vague Keywords (trigger clarification):
Specific Keywords (trigger automatic routing):
Ideal scenarios:
| Type | Keywords (Chinese) | Keywords (English) | Priority |
|---|---|---|---|
| web 🔍 | 搜索、查找、调研、研究、github、项目、趋势、报告、分析、对比、评测、最新、2026、新闻、动态 | search, research, github, project, trend, report, analysis, comparison, latest, news | 1 (Highest) |
| code 💻 | 代码、编程、脚本、函数、nodejs、react、vue、typescript、javascript、html、css、前端、后端、api、接口、调试、bug、优化、重构 | code, programming, script, function, nodejs, react, vue, typescript, javascript, html, css, frontend, backend, api, debug, bug, optimize, refactor | 2 |
| data 📊 | 数据、分析、统计、图表、可视化、excel、csv、json、处理、转换 | data, analysis, statistics, chart, visualization, excel, csv, json, processing, conversion | 3 |
| doc 📄 | 文档、说明、教程、指南、手册、readme、wiki、注释、文档化 | documentation, guide, tutorial, manual, readme, wiki, comment, document | 4 |
| chat 💬 | 你好、谢谢、再见、今天、明天、安排、计划、汇报、总结、提醒、备忘 | hello, thanks, goodbye, today, tomorrow, plan, schedule, summary, reminder, memo | 5 (Default) |
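The priority column can be implemented simply by checking the types from highest to lowest priority, so that when several keyword sets match, the highest-priority type wins. A minimal sketch (route_type is a hypothetical helper name; keyword lists are abbreviated):

```shell
#!/usr/bin/env bash
# Sketch of the priority rule: web > code > data > doc > chat.
route_type() {
  local msg="$1"
  case "$msg" in *搜索*|*调研*|*趋势*|*github*) echo "web";  return ;; esac  # priority 1
  case "$msg" in *代码*|*脚本*|*api*|*bug*)     echo "code"; return ;; esac  # priority 2
  case "$msg" in *数据*|*csv*|*json*|*图表*)    echo "data"; return ;; esac  # priority 3
  case "$msg" in *文档*|*教程*|*readme*)        echo "doc";  return ;; esac  # priority 4
  echo "chat"                                                                # priority 5 (default)
}
```

For example, "写个脚本处理数据" matches both code (脚本) and data (数据) keywords, but the code check runs first, matching the worked example in "Vague Task Handling".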
Note: Model IDs are configurable via environment variables. Replace provider/ with your actual model provider (e.g., bailian/, openai/, anthropic/).
| Task Type | Default Model | Environment Variable | Reason |
|---|---|---|---|
| code | provider/qwen3-coder-plus | HIVE_MODEL_CODE | Specialized in code generation and debugging |
| web | provider/qwen3-max | HIVE_MODEL_WEB | Strong search and reasoning capabilities |
| data | provider/qwen3-coder-plus | HIVE_MODEL_DATA | Code-based data processing |
| doc | provider/qwen3.5-plus | HIVE_MODEL_DOC | Good text generation, cost-effective |
| chat | provider/qwen3.5-plus | HIVE_MODEL_CHAT | Best for casual conversation, cost-effective |
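The environment-variable overrides in the table map naturally onto shell ${VAR:-default} fallback. A sketch (model_for is a hypothetical helper name; provider/ is the placeholder prefix from the note above):

```shell
#!/usr/bin/env bash
# Resolve the model for a task type: env var override, else table default.
model_for() {
  case "$1" in
    code) echo "${HIVE_MODEL_CODE:-provider/qwen3-coder-plus}" ;;
    web)  echo "${HIVE_MODEL_WEB:-provider/qwen3-max}" ;;
    data) echo "${HIVE_MODEL_DATA:-provider/qwen3-coder-plus}" ;;
    doc)  echo "${HIVE_MODEL_DOC:-provider/qwen3.5-plus}" ;;
    chat) echo "${HIVE_MODEL_CHAT:-provider/qwen3.5-plus}" ;;
  esac
}
```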
Bailian (通义千问):
export HIVE_MODEL_CODE="bailian/qwen3-coder-plus"
export HIVE_MODEL_WEB="bailian/qwen3-max-2026-01-23"
export HIVE_MODEL_CHAT="bailian/qwen3.5-plus"
export HIVE_MODEL_DOC="bailian/qwen3.5-plus"
export HIVE_MODEL_DATA="bailian/qwen3-coder-plus"
Automatic Model Detection (Recommended):
# Auto-detect available models from OpenClaw
export HIVE_VALIDATE_MODEL=auto
- First run: detects models and caches the configuration
- Subsequent runs: use the cached config (24h TTL)
- Benefit: no manual configuration needed
Manual Validation Modes:
| Mode | Environment Variable | Behavior | Use Case |
|---|---|---|---|
| Auto (Recommended) | export HIVE_VALIDATE_MODEL=auto | Auto-detect + cache 24h | Best for most users |
| Cache | export HIVE_VALIDATE_MODEL=cache | Validate once, cache 24h | Manual config, stable |
| Always | export HIVE_VALIDATE_MODEL=1 | Validate every execution | Debugging, changes |
| Never | export HIVE_VALIDATE_MODEL=0 | Skip validation | Production, known config |
Cache Configuration:
# Cache directory (default: ~/.hive-task-router)
export HIVE_CACHE_DIR="$HOME/.hive-task-router"
# Cache TTL in seconds (default: 86400 = 24 hours)
export HIVE_CACHE_TTL=86400
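A freshness check against HIVE_CACHE_TTL can be sketched as below. This assumes the cached config is a single file under HIVE_CACHE_DIR; the models.cache file name is an assumption for illustration.

```shell
#!/usr/bin/env bash
# Return 0 if the cached model config exists and is younger than the TTL.
cache_is_fresh() {
  local file="${HIVE_CACHE_DIR:-$HOME/.hive-task-router}/models.cache"
  local ttl="${HIVE_CACHE_TTL:-86400}"
  [ -f "$file" ] || return 1
  # File mtime: GNU stat first, BSD stat as fallback
  local mtime
  mtime=$(stat -c %Y "$file" 2>/dev/null || stat -f %m "$file")
  [ $(( $(date +%s) - mtime )) -lt "$ttl" ]
}
```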
Validation Behavior: validation checks the configured model IDs against the models OpenClaw reports as available; IDs that still use the provider/ placeholder must be replaced with a real provider prefix first.

OpenAI:
export HIVE_MODEL_CODE="openai/gpt-4"
export HIVE_MODEL_WEB="openai/gpt-4-turbo"
export HIVE_MODEL_CHAT="openai/gpt-3.5-turbo"
export HIVE_MODEL_DOC="openai/gpt-3.5-turbo"
export HIVE_MODEL_DATA="openai/gpt-4"
Anthropic (Claude):
export HIVE_MODEL_CODE="anthropic/claude-3-5-sonnet"
export HIVE_MODEL_WEB="anthropic/claude-3-opus"
export HIVE_MODEL_CHAT="anthropic/claude-3-haiku"
export HIVE_MODEL_DOC="anthropic/claude-3-haiku"
export HIVE_MODEL_DATA="anthropic/claude-3-5-sonnet"
Mixed Providers:
# Use best model for each task type
export HIVE_MODEL_CODE="anthropic/claude-3-5-sonnet" # Best for code
export HIVE_MODEL_WEB="openai/gpt-4-turbo" # Best for search
export HIVE_MODEL_CHAT="bailian/qwen3.5-plus" # Cost-effective
| Task Type | Execution Mode | Reason |
|---|---|---|
| chat | Main Session | Quick response, no need for isolation |
| code/web/data/doc | Subagent | Long-running tasks, parallel execution, session isolation |
Priority Rule: When multiple keywords match, use the highest priority type (web > code > data > doc > chat).
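The mode table maps onto two command shapes seen throughout this document. A sketch that only builds the command string so it can be inspected before running (build_command is a hypothetical helper name):

```shell
#!/usr/bin/env bash
# chat stays in the main session; every other task type spawns a subagent.
build_command() {
  local type="$1" model="$2" task="$3"
  if [ "$type" = "chat" ]; then
    printf 'openclaw agent --session-id agent:main:chat --model %s --message "%s"\n' \
      "$model" "$task"
  else
    printf 'openclaw sessions spawn --mode run --runtime subagent --model %s --task "%s"\n' \
      "$model" "$task"
  fi
}
```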
When installed as an OpenClaw Skill, the agent will automatically use this skill when:
Agent Decision Example:
User: "帮我写一个 Python 脚本处理 Excel 数据"
↓
Agent checks: Contains "脚本" (code) + "数据" (data)
↓
Priority: code > data
↓
Agent executes:
openclaw sessions spawn \
--mode run \
--runtime subagent \
--model bailian/qwen3-coder-plus \
--task "帮我写一个 Python 脚本处理 Excel 数据"
Configuration for Agents:
Add to agent's AGENTS.md or SOUL.md:
## Hive Task Router Integration
When receiving tasks:
1. Check if message contains task keywords (see SKILL.md)
2. If yes → Use hive-task-router skill
3. If no → Handle directly
The router script automatically analyzes tasks and outputs recommended execution commands.
# Basic usage
bash router.sh "帮我写一个 Node.js 脚本"
# Analyze research task
bash router.sh "搜索 2026 年最新的前端趋势"
# Analyze data task
bash router.sh "分析这个 JSON 数据并生成图表"
Output format:
================================
蜂巢智能任务分发系统 - 路由分析
================================
任务描述:帮我写一个 Node.js 脚本
任务类型:code
推荐模型:bailian/qwen3-coder-plus
执行方式:subagent
📦 代码任务 - 使用 qwen3-coder-plus 模型
适合:Node.js、前端代码、脚本编写
================================
推荐执行命令:
================================
openclaw sessions spawn \
--mode run \
--runtime subagent \
--model bailian/qwen3-coder-plus \
--task "帮我写一个 Node.js 脚本"
openclaw sessions spawn \
--mode run \
--runtime subagent \
--model bailian/qwen3-coder-plus \
--task "帮我写一个 Express API 服务"
openclaw sessions spawn \
--mode run \
--runtime subagent \
--model bailian/qwen3-max-2026-01-23 \
--task "调研 5 个 React UI 库"
openclaw agent \
--session-id agent:main:chat \
--model bailian/qwen3.5-plus \
--message "今天有什么安排"
openclaw sessions spawn \
--mode run \
--runtime subagent \
--model bailian/qwen3-coder-plus \
--task "处理这个 CSV 文件并生成统计报告"
openclaw sessions spawn \
--mode run \
--runtime subagent \
--model bailian/qwen3.5-plus \
--task "为这个项目编写 README 文档"
For batch tasks, use parallel subagents:
# Spawn multiple subagents concurrently
openclaw sessions spawn --mode run --runtime subagent --model bailian/qwen3-max-2026-01-23 --task "调研项目 A" &
openclaw sessions spawn --mode run --runtime subagent --model bailian/qwen3-max-2026-01-23 --task "调研项目 B" &
openclaw sessions spawn --mode run --runtime subagent --model bailian/qwen3-max-2026-01-23 --task "调研项目 C" &
# Wait for all to complete
wait
# Then collect and summarize results
Example: code task
User Input:
帮我写一个 Node.js 文件处理脚本,支持读取 CSV 和 JSON 格式
Router Analysis:
Execution Command:
openclaw sessions spawn \
--mode run \
--runtime subagent \
--model bailian/qwen3-coder-plus \
--task "帮我写一个 Node.js 文件处理脚本,支持读取 CSV 和 JSON 格式"
User Input:
搜索 2026 年最新的前端趋势,包括 React、Vue、Svelte 的对比
Router Analysis:
Execution Command:
openclaw sessions spawn \
--mode run \
--runtime subagent \
--model bailian/qwen3-max-2026-01-23 \
--task "搜索 2026 年最新的前端趋势,包括 React、Vue、Svelte 的对比"
User Input:
今天有什么安排?帮我总结一下昨天的工作
Router Analysis:
Execution Command:
openclaw agent \
--session-id agent:main:chat \
--model bailian/qwen3.5-plus \
--message "今天有什么安排?帮我总结一下昨天的工作"
User Input:
分析这个销售数据 Excel 文件,生成可视化图表和统计报告
Router Analysis:
Execution Command:
openclaw sessions spawn \
--mode run \
--runtime subagent \
--model bailian/qwen3-coder-plus \
--task "分析这个销售数据 Excel 文件,生成可视化图表和统计报告"
User Input:
为这个 Python 项目编写完整的 API 文档和使用教程
Router Analysis:
Execution Command:
openclaw sessions spawn \
--mode run \
--runtime subagent \
--model bailian/qwen3.5-plus \
--task "为这个 Python 项目编写完整的 API 文档和使用教程"
User Input (Multiple Tasks):
1. 写个脚本处理 JSON 数据
2. 搜索最新 AI 工具
3. 今天有什么安排
Router Analysis:
Parallel Execution:
# Task 1 & 2 run in parallel subagents
openclaw sessions spawn --mode run --runtime subagent --model bailian/qwen3-coder-plus --task "写个脚本处理 JSON 数据" &
openclaw sessions spawn --mode run --runtime subagent --model bailian/qwen3-max-2026-01-23 --task "搜索最新 AI 工具" &
# Task 3 runs in main session (non-blocking)
openclaw agent --session-id agent:main:chat --model bailian/qwen3.5-plus --message "今天有什么安排"
# Wait for subagents
wait
Performance: 3x faster than sequential execution
Option 1: via ClawHub

clawhub install qiongcao/hive-task-router

Option 2: manual copy

cp -r hive-task-router ~/.openclaw/workspace/skills/
chmod +x ~/.openclaw/workspace/skills/hive-task-router/router.sh
export HIVE_MODEL_CODE="bailian/qwen3-coder-plus"
export HIVE_MODEL_WEB="bailian/qwen3-max-2026-01-23"
export HIVE_MODEL_CHAT="bailian/qwen3.5-plus"
The defaults are bailian/qwen3-coder-plus (code, data), bailian/qwen3-max-2026-01-23 (web), and bailian/qwen3.5-plus (chat, doc).

Optional environment variables for customization:
# Model overrides (required)
export HIVE_MODEL_CODE="bailian/qwen3-coder-plus"
export HIVE_MODEL_WEB="bailian/qwen3-max-2026-01-23"
export HIVE_MODEL_CHAT="bailian/qwen3.5-plus"
export HIVE_MODEL_DOC="bailian/qwen3.5-plus"
export HIVE_MODEL_DATA="bailian/qwen3-coder-plus"
# Optional: custom session IDs
export HIVE_SESSION_CODE="custom:code:session"
export HIVE_SESSION_WEB="custom:web:session"
export HIVE_SESSION_CHAT="custom:chat:session"
# Optional: concurrency limit
export HIVE_MAX_CONCURRENT=10
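HIVE_MAX_CONCURRENT can be honored with plain bash job control: wait for a running job to finish before spawning the next subagent. A sketch (run_limited is a hypothetical helper name; requires bash >= 4.3 for wait -n):

```shell
#!/usr/bin/env bash
# Spawn "$@" in the background, but never exceed HIVE_MAX_CONCURRENT jobs.
run_limited() {
  local max="${HIVE_MAX_CONCURRENT:-10}"
  while [ "$(jobs -rp | wc -l)" -ge "$max" ]; do
    wait -n   # block until any background job exits
  done
  "$@" &      # e.g. an `openclaw sessions spawn ...` invocation
}
```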
# Check models
openclaw models list | grep bailian
# Test router script
bash router.sh "测试任务"
# Verify environment variables
echo $HIVE_MODEL_CODE
echo $HIVE_MODEL_WEB
# Make sure script is executable
chmod +x router.sh
# Run with full path
bash /path/to/router.sh "task"
# Check available models
openclaw models list
# Update environment variables with available models
export HIVE_MODEL_CODE="bailian/qwen3-coder-plus"
# Add more specific keywords to router.sh
# Edit CODE_KEYWORDS, WEB_KEYWORDS, etc.
# Verify environment variables are set
echo $HIVE_MODEL_CODE
echo $HIVE_MODEL_WEB
# Set them explicitly before running router.sh
export HIVE_MODEL_CODE="bailian/qwen3-coder-plus"
bash router.sh "task"
Task Distribution Principles
Model Selection
Concurrency Control
Environment Management: persist your HIVE_* exports in .bashrc or .zshrc.

| Metric | Traditional | Hive Router | Improvement |
|---|---|---|---|
| Researching 3 projects | ~180s | ~60s | 3x ⚡ |
| Model utilization | Single model | Multi-model | Flexible |
| Task routing | Manual | Automatic | Intelligent |
| Multi-provider | Manual switching | Auto config | Seamless |
| Provider | Status | Notes |
|---|---|---|
| Bailian (通义千问) | ✅ Tested | Default configuration |
| OpenAI (GPT) | ✅ Compatible | Set HIVE_MODEL_* variables |
| Anthropic (Claude) | ✅ Compatible | Set HIVE_MODEL_* variables |
| Google (Gemini) | ✅ Compatible | Set HIVE_MODEL_* variables |
| Other OpenAI-compatible | ✅ Compatible | Use provider/ prefix |
MIT License - Feel free to use and modify.
Author: qiongcao
Version: 1.0.0
Last Updated: 2026-03-12
Universal Model Support: Yes