Model Router
Model Router - Parallel multi-LLM invocation with result merging. Use when you need better answers, want to compare model outputs, or want the best result from multiple LLMs.
MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan (OpenClaw): Suspicious, medium confidence

Purpose & Capability
The name and description line up with the code: model_router implements parallel invocation and merging, and it requires only python3, which is proportionate. However, the code contains a mapping of real provider endpoints (OpenAI, Anthropic, Google, Baidu, Alibaba, etc.) even though the implementation currently simulates calls. That mapping is plausible for the stated purpose, but it suggests the skill is intended to call external APIs once fully implemented.
Instruction Scope
SKILL.md instructs use of a 'Router' class in examples ('from model_router import Router') but the module actually exposes ModelRouter and route(); this mismatch is likely to confuse users. The README examples also reference some model names (e.g., 'cursor', 'windsurf', 'codeium') not present in the LLM enum or endpoint map. The SKILL.md does not instruct the agent or user to provide API keys or where to put them, yet the ModelRouter constructor accepts an api_keys dict — incomplete instructions could lead to accidental misuse or surprise network calls if the module is extended.
Install Mechanism
This is an instruction-only skill with a bundled Python file; there is no install spec that downloads remote archives or executes installers. SKILL.md suggests installing via 'npx clawhub install model-router-waai' (external tooling) but there is no automatic installer embedded in the package. No high-risk download URLs or extract operations are present.
Credentials
The skill declares no required environment variables or credentials, which matches the shipped code (the code simulates calls and does not read env vars). However, given the presence of real API endpoint mappings and the ModelRouter.api_keys parameter, in real use the skill will need provider API keys; SKILL.md gives no guidance about which credentials to supply or how. This lack of documented credential handling is a usability/security concern (users may accidentally provide keys in insecure ways).
Persistence & Privilege
The skill does not request always:true, does not modify system-wide settings, and does not claim to persist credentials or change other skills' configurations. It runs as a simple module and a small CLI demo; no elevated privileges are requested.
What to consider before installing
This package appears to implement the stated feature (parallel calls and merge) but the documentation and code are inconsistent and incomplete. Before installing or using it with real API keys:
- Run it in a sandbox without any provider credentials to verify behavior (the current code simulates API calls).
- Inspect future versions for added network requests — the file already lists many provider endpoints, and a later change could start making real HTTP calls.
- Don't hand over API keys until you confirm how the module expects them (ModelRouter accepts an api_keys dict; SKILL.md doesn't document env vars or config paths). Prefer passing keys explicitly at runtime and store them in secure secrets storage, not in plaintext files.
- Expect to fix small documentation mismatches (Router vs ModelRouter) before depending on it in production.
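For the key-handling advice above, one safe pattern is to read provider keys from the environment at runtime and pass them in explicitly. This is a sketch, not the skill's documented interface: the environment variable names and the `collect_api_keys` helper are illustrative; only the `api_keys` constructor parameter is confirmed by the review.

```python
import os

def collect_api_keys(providers: list[str]) -> dict[str, str]:
    # Read provider keys from environment variables at runtime so they
    # never live in plaintext config files. Variable names are illustrative,
    # e.g. OPENAI_API_KEY, ANTHROPIC_API_KEY.
    keys = {}
    for provider in providers:
        value = os.environ.get(f"{provider.upper()}_API_KEY")
        if value:
            keys[provider] = value
    return keys

# Hypothetical usage, assuming the documented api_keys dict parameter:
# router = ModelRouter(api_keys=collect_api_keys(["openai", "anthropic"]))
```

Keys collected this way exist only in process memory for the duration of the run, which is the "secure secrets storage, not plaintext files" pattern the bullet recommends.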
If you need a definitive safety judgement (benign vs malicious), ask the author to clarify credential handling and intended networking behavior, or request a version that explicitly performs authenticated calls with clear credential configuration.

Like a lobster shell, security has layers: review code before you run it.
Current version: v1.1.0
Runtime requirements
🔀 Clawdis
Bins: python3
SKILL.md
🔀 Model Router / 模型路由器
Parallel multi-LLM invocation with intelligent result merging. Get the best from multiple models.
When to Use / 使用场景
| EN | CN |
|---|---|
| Need better answers than single model | 需要比单一模型更好的答案 |
| Compare outputs from different LLMs | 对比不同模型输出 |
| Get best result through ensemble | 通过集成获取最佳结果 |
| Critical tasks requiring reliability | 需要可靠性的关键任务 |
Workflow / 工作流
```
User Task / 用户任务
        │
        ▼
┌────────────────────┐
│  Parallel Invoke   │  / 并行调用
│ ┌────┬──────┬────┐ │
│ │GPT4│Claude│Kimi│ │
│ └────┴──────┴────┘ │
└────────────────────┘
        │
        ▼
┌────────────────────┐
│    Target Merge    │  / 目标模型合并
│    (Target LLM)    │
└────────────────────┘
        │
        ▼
Final Result / 最终结果
```
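The fan-out/merge flow above can be sketched with asyncio. This is a minimal illustration, not the skill's actual implementation: `call_model` is a hypothetical stand-in for a provider call (the shipped code only simulates API calls anyway).

```python
import asyncio

async def call_model(name: str, task: str) -> str:
    # Hypothetical stand-in for a provider API call; the shipped code
    # simulates responses instead of hitting real endpoints.
    await asyncio.sleep(0.01)
    return f"[{name}] draft for: {task}"

async def run(task: str, models: list[str], merge_model: str) -> dict:
    # Fan out: invoke every model concurrently.
    drafts = await asyncio.gather(*(call_model(m, task) for m in models))
    # Merge: hand all drafts to the target model for one final answer.
    final = await call_model(merge_model, "merge:\n" + "\n".join(drafts))
    return {"final": final, "sources": list(drafts), "merge_model": merge_model}

result = asyncio.run(run("write a poem", ["gpt4", "claude", "kimi"], "gpt4"))
```

`asyncio.gather` preserves input order, so `sources` lines up with the `models` list, which is what makes the later per-model comparison in the response format possible.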
Features / 功能
| Feature | EN | CN |
|---|---|---|
| Parallel invocation | Invoke multiple models concurrently | 并行调用多模型 |
| Auto-routing | Automatic routing | 自动路由 |
| Result merging | Intelligent merging of results | 结果智能合并 |
| Quality assessment | Assess answer quality | 质量评估 |
Supported Models / 支持模型
| Model | Provider |
|---|---|
| gpt4 | OpenAI |
| claude | Anthropic |
| kimi | Moonshot AI (月之暗面) |
| deepseek | DeepSeek (深度求索) |
| qwen | Alibaba (阿里) |
| ernie | Baidu (百度) |
| gemini | Google |
Usage / 使用
```python
from model_router import Router

# Simple / 简单
result = await Router().run(
    task="写一首关于春天的诗",  # "Write a poem about spring"
    models=["gpt4", "claude", "kimi"],
    merge_model="gpt4"
)

# Advanced / 高级
result = await Router().run(
    task="分析这段代码",  # "Analyze this code"
    models=["cursor", "windsurf", "codeium"],
    merge_model="claude",
    timeout=30,
    merge_prompt="合并以下代码审查结果,给出最佳建议"  # "Merge the code-review results below and give the best recommendation"
)
```
Response / 返回结果
```python
{
    "final": "...",           # merged final result / 合并后的最终结果
    "sources": [...],         # raw result from each model / 各模型原始结果
    "merge_model": "claude",  # model used for merging / 合并用的模型
    "total_time": 2.5         # total elapsed time in seconds / 总耗时(秒)
}
```
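The `timeout` parameter in the advanced example implies a per-call deadline. A hedged sketch of how such a cutoff can work with `asyncio.wait_for` follows; this is not the skill's actual implementation, and `fake_call` only simulates provider latency.

```python
import asyncio

async def call_with_timeout(name: str, timeout: float) -> dict:
    async def fake_call() -> str:
        # Simulated provider latency; the shipped code also simulates calls.
        await asyncio.sleep(0.01)
        return f"[{name}] answer"
    try:
        text = await asyncio.wait_for(fake_call(), timeout=timeout)
        return {"model": name, "ok": True, "text": text}
    except asyncio.TimeoutError:
        # A model that misses the deadline is dropped rather than
        # blocking the merge step.
        return {"model": name, "ok": False, "text": ""}

hit = asyncio.run(call_with_timeout("kimi", timeout=1.0))
miss = asyncio.run(call_with_timeout("slow", timeout=0.001))
```

Dropping late responders keeps `total_time` in the response bounded by the slowest model that finished within the deadline, rather than by the slowest model overall.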
Installation / 安装
```
npx clawhub install model-router-waai
```
Author / 作者
- WaaiOn
Files
2 total
