# fleet-doctrine

Model routing strategy for a multi-model AI fleet. Use when spawning sub-agents, choosing models for cron jobs, delegating coding tasks, or deciding which model should handle a task. Covers Opus, Codex, Sonnet, Gemini, and Grok routing rules.

## Install

```
openclaw skills install fleet-doctrine
```

## Model aliases

- opus → anthropic/claude-opus-4-6
- sonnet → anthropic/claude-sonnet-4-6
- codex → openai-codex/gpt-5.3-codex
- gemini → google/gemini-3-pro-preview
- grok → xai/grok-4

## Opus

When: Main session, orchestration, security decisions, financial tasks, reviewing other models' output, anything high-stakes or ambiguous.

Never waste on: Routine crons, simple lookups, email summaries, templated tasks.
## Codex

When: Big coding tasks (refactors, new features, full-repo work), PR reviews, debugging complex issues, checking other models' code output.

Spawn as: `sessions_spawn(model: "codex", task: "...")` or a sub-agent with `--model codex`.

Pairs with: Grok for parallel work or second opinions on code.
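The spawn-and-pair pattern above can be sketched as follows. This is a minimal illustration, not the real API: `sessions_spawn` is stubbed here because its actual signature depends on your openclaw runtime.

```python
# Hypothetical stub: the real sessions_spawn lives in the openclaw runtime;
# this stand-in only returns a session descriptor so the pattern is visible.
def sessions_spawn(model: str, task: str) -> dict:
    """Stand-in for the runtime's spawn call."""
    return {"model": model, "task": task, "status": "spawned"}

# Delegate a large coding task to Codex, and spawn Grok in parallel
# for a fast second opinion, as the doctrine suggests.
primary = sessions_spawn(model="codex", task="Refactor the auth module")
reviewer = sessions_spawn(model="grok", task="Sanity-check the auth refactor")
```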
## Sonnet

When: Cron jobs, email briefings, admissions reports, routine admin, quick lookups, drafts, form letters, anything repetitive or templated.

Default for: All crons unless the task requires reasoning.
## Gemini

When: Image generation, analysing long documents (1M context), visual tasks, when a different perspective helps.

Best at: Processing massive context windows, multimodal input.
## Grok

When: Parallel work alongside Codex, speed-over-depth tasks, sanity-checking other models' output, when you need a quick second opinion.

Good for: Lightweight code reviews, fast research, draft generation.
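Taken together, the sections above amount to a routing table. A minimal sketch in Python; the category labels are illustrative names of my own, not part of the skill:

```python
# Hypothetical routing table distilled from the doctrine above.
# Category labels are illustrative, not defined by the skill itself.
DOCTRINE = {
    "high_stakes": "opus",     # orchestration, security, financial, review
    "big_coding": "codex",     # refactors, new features, full-repo work
    "routine": "sonnet",       # crons, briefings, templated admin
    "long_context": "gemini",  # 1M-context documents, multimodal input
    "quick_check": "grok",     # parallel work, fast second opinions
}

def pick_model(category: str) -> str:
    """Return the model alias for a task category; Sonnet is the default."""
    return DOCTRINE.get(category, "sonnet")
```

Unknown categories deliberately fall through to Sonnet, matching the "default for all crons" rule rather than raising an error.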
## Fallbacks

If a model is unavailable on your instance, fall back to your default model (typically Sonnet). The doctrine describes intent: use the best model you have access to that matches the category. When in doubt, do the task with what you've got rather than failing.