Install

```
openclaw skills install token-saver-75plus
```

Automatically classifies each request and routes it to the cheapest capable model, applying maximum output compression for 75%+ token savings.

Understand fully, execute cheaply: the orchestrator must fully understand the task before routing. Never sacrifice comprehension for speed.
| Tier | Pattern | Orchestrator | Executor |
|---|---|---|---|
| T1 | yes/no, status, trivial facts, quick lookups | Handle alone | — |
| T2 | summaries, how-to, lists, bulk processing, formatting | Handle alone OR spawn Groq | Groq (FREE) |
| T3 | debugging, multi-step, code generation, structured analysis | Orchestrate + spawn | Codex for code, Groq for bulk |
| T4 | strategy, complex decisions, multi-agent coordination, creative | Spawn Opus | Opus orchestrates, spawns Codex/Groq from within |
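The tier routing above can be sketched as a simple keyword classifier that checks the most demanding patterns first, so complex work is never under-routed. This is a minimal illustration only; the function name and keyword lists are assumptions, not part of the skill.

```python
# Illustrative sketch of the tier table above. The keyword lists and
# function name are assumptions for demonstration, not part of the skill.

T2_HINTS = ("summarize", "summary", "how to", "list", "format", "convert")
T3_HINTS = ("debug", "fix", "implement", "refactor", "generate code")
T4_HINTS = ("strategy", "plan", "coordinate", "design", "creative")

def classify_tier(request: str) -> str:
    """Return the routing tier (T1-T4) for a request, checking the
    most demanding patterns first."""
    text = request.lower()
    if any(h in text for h in T4_HINTS):
        return "T4"  # spawn Opus; it orchestrates Codex/Groq from within
    if any(h in text for h in T3_HINTS):
        return "T3"  # orchestrate + spawn Codex for code, Groq for bulk
    if any(h in text for h in T2_HINTS):
        return "T2"  # handle alone or spawn Groq (free)
    return "T1"      # trivial: handle alone, no spawn
```

A real classifier would weigh context and ambiguity, not just keywords; the point is only that T4 patterns must shadow T3, and T3 must shadow T2.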
| Model | Use For | Cost | Spawn with |
|---|---|---|---|
| groq/llama-3.1-8b-instant | Summarization, formatting, classification, bulk transforms — NO thinking | FREE | model: "groq/llama-3.1-8b-instant" |
| openai/gpt-5.3-codex | ALL code generation, code review, refactoring | $$$ | model: "openai/gpt-5.3-codex" |
| openai/gpt-5.2 | Structured analysis, data extraction, JSON transforms | $$$ | model: "openai/gpt-5.2" |
| anthropic/claude-opus-4-6 | Strategy, complex orchestration, failure recovery (T4 only) | $$$$ | model: "anthropic/claude-opus-4-6" |
Groq (free bulk work):

```
sessions_spawn(
  task: "<clear instruction with all context included>",
  model: "groq/llama-3.1-8b-instant"
)
```

Codex (all code):

```
sessions_spawn(
  task: "Write <language> code that <detailed spec>. Include comments. Output the complete file.",
  model: "openai/gpt-5.3-codex"
)
```

Opus (T4 strategy):

```
sessions_spawn(
  task: "<full context + goal>. You have full tool access. Use sessions_spawn with Codex for code and Groq for bulk subtasks.",
  model: "anthropic/claude-opus-4-6"
)
```
| Tier | Max output |
|---|---|
| T1 | 1-3 lines |
| T2 | 5-15 bullets |
| T3 | Structured sections, <400 words |
| T4 | Longer allowed, still dense |
Append: [~X tokens | Tier: Tn | Route: model(s) used]
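The appended footer can be produced with a small formatter. A minimal sketch; the function name and signature are assumptions, not part of the skill.

```python
def usage_footer(tokens: int, tier: str, routes: list[str]) -> str:
    """Format the per-response footer described above: approximate token
    count, tier, and the model(s) actually routed to (if any)."""
    route = ", ".join(routes) if routes else "orchestrator only"
    return f"[~{tokens} tokens | Tier: {tier} | Route: {route}]"
```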