LLM Supervisor
Pass. Audited by ClawScan on May 1, 2026.
Overview
This skill does what it claims: it switches agent LLM profiles when rate limits are hit. Users should understand that it can automatically change which model and provider agents use, and that it persists the active mode.
The skill appears safe for users who want automatic cloud-to-local LLM fallback. Before installing, be comfortable with a skill that can change agent LLM profiles, persist the active mode, and send local-mode work to your local Ollama service.
Findings (4)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Agents may automatically use a different cloud profile or a local model, which can affect quality, latency, cost, and behavior.
The skill can change the LLM profile used by agents. This matches the stated purpose, but it is still meaningful control over agent behavior.
event.agent.setLLMProfile("anthropic:default"); ... event.agent.setLLMProfile({ provider: "ollama", model: localModel, baseUrl: "http://127.0.0.1:11434" });
Install only if you want automated LLM switching, and use /llm status or /llm switch cloud/local to monitor or change the active mode.
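The fallback behavior this finding describes can be sketched as follows. This is a minimal illustration, not the skill's actual source: the stub `agent` object stands in for OpenClaw's `event.agent`, the `err.status` check and the `"llama3"` default model name are assumptions, while the `setLLMProfile` call shapes match the snippet quoted above.

```javascript
// Stub standing in for OpenClaw's event.agent (assumption, for illustration).
const agent = {
  profile: "anthropic:default",
  setLLMProfile(profile) { this.profile = profile; },
};

// Hypothetical handler: on a 429 rate-limit error, fall back to local Ollama.
function onLLMError(err, localModel = "llama3") { // default model is an assumption
  if (err.status === 429) {
    agent.setLLMProfile({
      provider: "ollama",
      model: localModel,
      baseUrl: "http://127.0.0.1:11434", // matches the skill's quoted baseUrl
    });
  }
}

onLLMError({ status: 429 });
```

After this runs, every subsequent agent call goes through the local provider, which is exactly the quality/latency/cost shift the finding warns about.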
A message that quotes or discusses the confirmation phrase could be treated as approval for local-model code tasks.
The confirmation control checks whether the phrase appears anywhere in the last user message, rather than requiring an exact standalone confirmation.
const confirmed = lastUserMessage.toUpperCase().includes(confirmationPhrase);
Only type the confirmation phrase when you intentionally want local-model code generation; maintainers should consider an exact-match or explicit approval mechanism.
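The exact-match approach suggested for maintainers can be sketched like this. The phrase `"USE LOCAL MODEL"` is an assumed placeholder (the skill's real phrase is not shown in this report); the point is that the whole trimmed message must equal the phrase, rather than merely contain it as the quoted `includes` check does.

```javascript
// Assumed placeholder phrase; the skill's actual phrase may differ.
const CONFIRMATION_PHRASE = "USE LOCAL MODEL";

// Stricter check: the entire trimmed message must be the phrase,
// so quoting or discussing the phrase no longer counts as approval.
function isConfirmed(lastUserMessage) {
  return lastUserMessage.trim().toUpperCase() === CONFIRMATION_PHRASE;
}
```

A message such as "Do not type USE LOCAL MODEL yet" would pass the substring check quoted above but fail this exact-match version.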
Once switched to local mode, later agents may continue using local mode until the state is changed back.
The skill persists its active mode and last error in OpenClaw state so future agents can inherit the current LLM mode.
ctx.state.set("llm-supervisor:state", state);
Check /llm status after rate-limit events and switch back to cloud mode when desired.
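The persistence hazard can be illustrated with a minimal sketch. A plain `Map` stands in for OpenClaw's `ctx.state`, and the record shape (`mode`, `lastError`, `changedAt`) is an assumption; only the `"llm-supervisor:state"` key comes from the snippet above.

```javascript
// Plain Map standing in for OpenClaw's ctx.state (assumption, for illustration).
const state = new Map();

function switchMode(mode, lastError = null) {
  // Record shape is assumed; the key matches the skill's quoted snippet.
  state.set("llm-supervisor:state", { mode, lastError, changedAt: Date.now() });
}

function currentMode() {
  const record = state.get("llm-supervisor:state");
  return record ? record.mode : "cloud"; // assumed default when nothing is persisted
}

// One rate-limit event flips the mode...
switchMode("local", "429 rate limit");
// ...and any later agent reading the same state inherits local mode
// until something explicitly switches back.
```

This is why the recommendation above is to check /llm status after rate-limit events: nothing resets the mode automatically.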
Your prompts may be processed by whatever Ollama-compatible service is listening on the local machine.
In local mode, agent prompts and tasks are directed to a local Ollama provider endpoint. This is disclosed and purpose-aligned.
baseUrl: "http://127.0.0.1:11434"
Ensure your local Ollama service is installed, trusted, and configured as expected before relying on local mode.
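One way to verify the local service before relying on local mode is to query Ollama's standard model-listing endpoint, GET /api/tags, at the baseUrl the skill uses. The helper names below are hypothetical; the endpoint and default port are Ollama's documented defaults.

```javascript
// Default matches the baseUrl quoted in this report.
const OLLAMA_BASE = "http://127.0.0.1:11434";

// Hypothetical helper: build the model-listing URL, tolerating a trailing slash.
function tagsUrl(baseUrl = OLLAMA_BASE) {
  return `${baseUrl.replace(/\/$/, "")}/api/tags`;
}

// Hypothetical pre-flight check: list the models the local service actually has.
// Requires a fetch-capable runtime (e.g. Node 18+) and a running Ollama instance.
async function listLocalModels(baseUrl = OLLAMA_BASE) {
  const res = await fetch(tagsUrl(baseUrl));
  if (!res.ok) throw new Error(`Ollama not reachable: HTTP ${res.status}`);
  const body = await res.json();
  return body.models.map((m) => m.name);
}
```

If this check fails or returns an unexpected model list, local mode would route your prompts somewhere you did not intend.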
