Multi-Model Orchestrator
Pass. Audited by VirusTotal on May 5, 2026.
Overview
Type: OpenClaw Skill
Name: multi-model-orchestrator
Version: 2.0.0

The multi-model-orchestrator is a comprehensive framework for managing AI agent workflows, including debugging, frontend design, and parallel task execution. While it references non-existent model versions (e.g., GPT-5.5 in agents/team.json) and third-party API routing strings (sub2api-openai), these appear to be internal naming conventions or placeholders for an orchestration layer. The instructions in SKILL.md and the workflow files focus on improving code quality and systematic debugging (e.g., the 'Iron Laws' of debugging) rather than on performing unauthorized actions or data exfiltration.
Findings (0)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Risk: The agent could modify code and install packages in the local project without stopping for explicit user review, which may break builds, add unwanted dependencies, or introduce supply-chain risk.
Details: This workflow tells the agent to automatically repair problems and install missing dependencies. For a coding skill this is related to the purpose, but package installation and automatic fixes are high-impact actions when no approval gate, command preview, scope, or rollback limit is stated.
Evidence: "Loop execution: ... 3. If a problem is found → auto-fix ... ## Auto-fix strategy ... 3. Missing dependency → auto-install"
Mitigation: Require confirmation before installing dependencies or applying fixes, show the exact commands and diffs first, restrict actions to the intended repository, and document rollback steps.
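The confirmation gate recommended here can be sketched as a thin wrapper that previews the exact command and runs nothing without approval. `gated_run` and its `approve` callback are hypothetical illustrations, not part of the skill:

```python
import shlex
import subprocess

def gated_run(command, approve, cwd="."):
    """Preview a shell command and execute it only after explicit approval.

    approve: callback that receives the exact command string and returns
    True to proceed (e.g. an interactive y/n prompt shown to the user).
    cwd: restricts execution to the intended repository directory.
    """
    argv = shlex.split(command)  # the exact tokens that would run
    if not approve(command):
        return None              # declined: nothing is installed or changed
    return subprocess.run(argv, cwd=cwd, capture_output=True, text=True)

# Example: an auto-fix step proposes installing a missing dependency.
# With a declining callback, no command is ever executed.
result = gated_run("pip install some-missing-package", approve=lambda cmd: False)
```

A real deployment would pair this with a diff preview for file edits and a logged rollback plan, so every high-impact action leaves an audit trail.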
Risk: Private code or task context may be sent to whichever configured model/provider the workflow selects.
Details: The skill explicitly routes tasks to multiple model providers or model identities. This is central to the skill's purpose, but it means task details, code, or other context may be shared across those models.
Evidence: "Automatically select the optimal model by task type: ... sub2api-openai/gpt-5.5 ... mimo/mimo-v2.5-pro ... local-qwen/gpt-4o"
Mitigation: Use the skill only with code and data that may be shared with the configured providers, and ask the agent to minimize the context sent to subagents or external models.
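One way to minimize what reaches any configured provider is a redaction-and-truncation pass applied to the context before routing. The `minimize_context` helper and its secret patterns below are illustrative assumptions, not features of the skill:

```python
import re

# Hypothetical patterns; a real deployment would extend and tune this list.
SECRET_PATTERNS = [
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),
    re.compile(r"(?i)bearer\s+[A-Za-z0-9._-]+"),
]

def minimize_context(text: str, max_chars: int = 2000) -> str:
    """Redact likely secrets and truncate context before it is routed
    to any configured model provider or subagent."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text[:max_chars]
```

Regex redaction is best-effort only; the stronger control remains not handing the orchestrator data that cannot be shared with every configured provider.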
Risk: The agent may continue iterating, spawning sub-workflows, or consuming model/API budget until it decides the task is complete.
Details: The artifacts disclose persistent and fully automatic execution loops. This is part of the stated orchestration design, but long-running autonomous loops should be bounded by user-approved stopping conditions.
Evidence: "$ralph (persistent completion) ... keep executing until complete ... $autopilot (fully automatic) ... → fix feedback loop"
Mitigation: Set maximum iteration, time, cost, and file-change limits before using $ralph or $autopilot, and require checkpoints before major changes.
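The stopping conditions recommended here can be enforced with a simple budget wrapper around the loop. `bounded_loop` is a hypothetical sketch of such a limit, not an existing skill feature:

```python
import time

def bounded_loop(step, max_iters=10, max_seconds=60.0):
    """Run an autonomous fix loop under explicit stopping conditions.

    step() returns True when the task is complete. The loop also stops
    when the iteration or wall-clock budget is exhausted, so a
    "run until done" mode cannot consume unbounded model/API budget.
    Returns (reason, iterations_used).
    """
    start = time.monotonic()
    for i in range(max_iters):
        if time.monotonic() - start > max_seconds:
            return ("timeout", i)
        if step():
            return ("done", i + 1)
    return ("iteration_limit", max_iters)
```

A cost or file-change budget slots in the same way: check it at the top of each iteration and stop with a named reason the user can review at a checkpoint.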
