LLM Supervisor

Pass. Audited by ClawScan on May 1, 2026.

Overview

This skill does what it claims: it switches agent LLM profiles when rate limits are hit. Users should understand that it can automatically change which model and provider agents use, and that it persists that mode.

This appears safe for users who want automatic cloud-to-local LLM fallback. Before installing, be comfortable with a skill that can change agent LLM profiles, persist the active mode, and send local-mode work to your local Ollama service.

Findings (4)

This is an artifact-based, informational review of SKILL.md, metadata, install specs, static-scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Finding 1: Agent LLM profile switching

What this means

Agents may automatically use a different cloud profile or a local model, which can affect quality, latency, cost, and behavior.

Why it was flagged

The skill can change the LLM profile used by agents. This matches the stated purpose, but it is still meaningful control over agent behavior.

Skill content
event.agent.setLLMProfile("anthropic:default");
...
event.agent.setLLMProfile({ provider: "ollama", model: localModel, baseUrl: "http://127.0.0.1:11434" });
Recommendation

Install this skill only if you want automated LLM switching. Use /llm status to check the active mode, and /llm switch cloud or /llm switch local to change it.
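The switching behavior this finding describes can be sketched as a small decision helper. Only setLLMProfile, the "anthropic:default" profile, and the Ollama baseUrl come from the skill excerpt; the error shape (an HTTP-style status code) and the local model name are assumptions for illustration.

```javascript
// Sketch of the fallback decision, assuming a rate-limit error carries an
// HTTP-style status code. Profile shapes follow the skill excerpt quoted below.
const CLOUD_PROFILE = "anthropic:default";
const LOCAL_PROFILE = {
  provider: "ollama",
  model: "llama3", // hypothetical local model name
  baseUrl: "http://127.0.0.1:11434",
};

// Returns the profile agents should use after observing an LLM error:
// fall back to the local profile on a 429, otherwise stay on cloud.
function profileAfterError(error) {
  const isRateLimit = Boolean(error) && error.status === 429;
  return isRateLimit ? LOCAL_PROFILE : CLOUD_PROFILE;
}
```

Under these assumptions, a 429 error selects the local Ollama profile and any other error keeps the cloud profile, which is the behavior the finding says can change quality, latency, and cost without user action.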

Finding 2: Loose confirmation check

What this means

A message that quotes or discusses the confirmation phrase could be treated as approval for local-model code tasks.

Why it was flagged

The confirmation control checks whether the phrase appears anywhere in the last user message, rather than requiring an exact standalone confirmation.

Skill content
const confirmed = lastUserMessage.toUpperCase().includes(confirmationPhrase);
Recommendation

Only type the confirmation phrase when you intentionally want local-model code generation; maintainers should consider an exact-match or explicit approval mechanism.
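The exact-match mechanism this recommendation suggests can be sketched alongside the flagged substring check. The flagged excerpt shows only the includes() call, so the phrase value and the surrounding flow here are assumptions.

```javascript
const confirmationPhrase = "USE LOCAL MODEL"; // hypothetical phrase

// Flawed (as flagged): any message merely containing the phrase,
// even one quoting or discussing it, counts as approval.
function looseConfirm(lastUserMessage) {
  return lastUserMessage.toUpperCase().includes(confirmationPhrase);
}

// Stricter sketch: the trimmed message must BE the phrase,
// not just contain it somewhere.
function strictConfirm(lastUserMessage) {
  return lastUserMessage.trim().toUpperCase() === confirmationPhrase;
}
```

With these definitions, a message like "Please don't use local model yet" passes the loose check but fails the strict one, which is exactly the gap the finding describes.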

Finding 3: Persisted mode state

What this means

Once switched to local mode, later agents may continue using local mode until the state is changed back.

Why it was flagged

The skill persists its active mode and last error in OpenClaw state so future agents can inherit the current LLM mode.

Skill content
ctx.state.set("llm-supervisor:state", state);
Recommendation

Check /llm status after rate-limit events and switch back to cloud mode when desired.
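Inspecting and reverting the inherited mode can be sketched against the state key quoted above. Only the "llm-supervisor:state" key comes from the skill; the state shape ({ mode, lastError }), the "cloud" default, and the use of a plain Map in place of ctx.state are assumptions.

```javascript
// Sketch of reading and reverting the persisted mode. The state shape
// ({ mode, lastError }) is an assumption; only the key is from the skill.
const STATE_KEY = "llm-supervisor:state";

// Report which mode future agents would inherit; assume "cloud" by default.
function currentMode(state) {
  const saved = state.get(STATE_KEY);
  return saved && saved.mode ? saved.mode : "cloud";
}

// Switch back to cloud mode while preserving any other persisted fields.
function switchToCloud(state) {
  const saved = state.get(STATE_KEY) || {};
  state.set(STATE_KEY, { ...saved, mode: "cloud" });
}
```

Because the mode is persisted rather than per-session, an explicit revert like switchToCloud is what stops later agents from inheriting local mode indefinitely.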

Finding 4: Local Ollama endpoint

What this means

Your prompts may be processed by whatever Ollama-compatible service is listening on the local machine.

Why it was flagged

In local mode, agent prompts and tasks are directed to a local Ollama provider endpoint. This is disclosed and purpose-aligned.

Skill content
baseUrl: "http://127.0.0.1:11434"
Recommendation

Ensure your local Ollama service is installed, trusted, and configured as expected before relying on local mode.
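One way to check that local-mode prompts stay on your machine is to verify the configured base URL is loopback-only before trusting local mode. This guard is an illustration, not part of the skill.

```javascript
// Returns true only when the base URL points at the local machine, so
// local-mode prompts cannot silently be sent to a remote host.
// Illustrative guard; not part of the skill itself.
function isLoopbackBaseUrl(baseUrl) {
  let url;
  try {
    url = new URL(baseUrl); // WHATWG URL, global in modern Node.js
  } catch {
    return false; // unparseable URLs are rejected
  }
  return url.hostname === "127.0.0.1" || url.hostname === "localhost";
}
```

The skill's quoted baseUrl ("http://127.0.0.1:11434") passes this check; any non-loopback host would fail it, which is the property worth confirming before relying on local mode.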