LLM Supervisor
Pass. Audited by VirusTotal on May 12, 2026.
Overview
Type: OpenClaw Agent Skill
Name: LLM Supervisor
Developer:
Version:
Suspicious High-Entropy/Eval files: 12

Description: The OpenClaw LLM Supervisor skill is designed to automatically switch between cloud and local LLM models (Ollama) when cloud rate limits are encountered. It includes a safety mechanism requiring explicit user confirmation (`CONFIRM LOCAL CODE`) before executing code-related tasks with a local LLM. The code primarily interacts with the OpenClaw SDK for state management, agent LLM profile configuration, and user notifications. There is no evidence of data exfiltration, malicious execution, persistence mechanisms, or prompt injection attempts in the code or documentation. The permissions requested (`llm`, `agents`, `notifications`, `state`) are appropriate for its stated functionality, and the local Ollama `baseUrl` is correctly set to `http://127.0.0.1:11434` in `hooks/onAgentStart.ts`.
Findings (0)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Agents may automatically use a different cloud profile or a local model, which can affect quality, latency, cost, and behavior.
The skill can change the LLM profile used by agents. This matches the stated purpose, but it is still meaningful control over agent behavior.
```ts
event.agent.setLLMProfile("anthropic:default");
...
event.agent.setLLMProfile({ provider: "ollama", model: localModel, baseUrl: "http://127.0.0.1:11434" });
```

Install only if you want automated LLM switching, and use `/llm status` or `/llm switch cloud/local` to monitor or change the active mode.
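The switching behavior the report describes can be sketched as a small decision function. The OpenClaw SDK itself is not reproduced here; the `LLMProfile` type, the HTTP 429 status check, and the `localModel` parameter are assumptions for illustration.

```typescript
// Hypothetical stand-in for the profile value passed to setLLMProfile:
// either a named cloud profile string or an explicit local provider config.
type LLMProfile =
  | string
  | { provider: string; model: string; baseUrl: string };

// Assumption: a cloud rate-limit failure surfaces as an HTTP 429 status.
function chooseProfile(cloudStatus: number, localModel: string): LLMProfile {
  if (cloudStatus === 429) {
    // Rate limited: fall back to the local Ollama endpoint on loopback.
    return {
      provider: "ollama",
      model: localModel,
      baseUrl: "http://127.0.0.1:11434",
    };
  }
  // No rate limit: stay on the default cloud profile.
  return "anthropic:default";
}
```

A caller would then pass the result to `event.agent.setLLMProfile(...)`; the point is that profile selection is fully automatic once a rate limit is observed.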
A message that quotes or discusses the confirmation phrase could be treated as approval for local-model code tasks.
The confirmation control checks whether the phrase appears anywhere in the last user message, rather than requiring an exact standalone confirmation.
```ts
const confirmed = lastUserMessage.toUpperCase().includes(confirmationPhrase);
```
Only type the confirmation phrase when you intentionally want local-model code generation; maintainers should consider an exact-match or explicit approval mechanism.
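To illustrate the weakness, here is a minimal comparison of the skill's permissive substring check against an exact-match alternative. The function names are illustrative, not part of the skill's code.

```typescript
const CONFIRMATION = "CONFIRM LOCAL CODE";

// Current behavior: the phrase matching anywhere in the message counts,
// so merely quoting or asking about the phrase also "confirms".
function confirmedLoose(msg: string): boolean {
  return msg.toUpperCase().includes(CONFIRMATION);
}

// Stricter alternative: the trimmed message must be exactly the phrase.
function confirmedStrict(msg: string): boolean {
  return msg.trim().toUpperCase() === CONFIRMATION;
}

confirmedLoose('What does "confirm local code" do?');  // true: accidental approval
confirmedStrict('What does "confirm local code" do?'); // false
confirmedStrict("  confirm local code  ");             // true: deliberate approval
```

The strict variant still accepts casual capitalization and whitespace but rejects messages that merely mention the phrase.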
Once switched to local mode, later agents may continue using local mode until the state is changed back.
The skill persists its active mode and last error in OpenClaw state so future agents can inherit the current LLM mode.
```ts
ctx.state.set("llm-supervisor:state", state);
```

Check `/llm status` after rate-limit events and switch back to cloud mode when desired.
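The inheritance behavior can be sketched with a minimal stand-in for the state store. The `Map`, the `SupervisorState` shape, and the default-to-cloud fallback are assumptions; only the persistence pattern is illustrated.

```typescript
// Hypothetical shape of the persisted supervisor state.
interface SupervisorState {
  mode: "cloud" | "local";
  lastError?: string;
}

// Stand-in for ctx.state: a simple key/value store shared across agents.
const store = new Map<string, SupervisorState>();
const KEY = "llm-supervisor:state";

function setMode(mode: "cloud" | "local", lastError?: string): void {
  store.set(KEY, { mode, lastError });
}

function currentMode(): "cloud" | "local" {
  // Later agents read whatever mode was last persisted; default to cloud.
  return store.get(KEY)?.mode ?? "cloud";
}

setMode("local", "429 rate limit");
currentMode(); // "local": persists until explicitly switched back
setMode("cloud");
currentMode(); // "cloud"
```

Because nothing automatically reverts the mode, a single rate-limit event can leave all subsequent agents on the local model until the user intervenes.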
Your prompts may be processed by whatever Ollama-compatible service is listening on the local machine.
In local mode, agent prompts and tasks are directed to a local Ollama provider endpoint. This is disclosed and purpose-aligned.
```ts
baseUrl: "http://127.0.0.1:11434"
```
Ensure your local Ollama service is installed, trusted, and configured as expected before relying on local mode.
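One way to verify this before relying on local mode is a reachability probe against the endpoint. Ollama exposes a `GET /api/version` endpoint; the fetch function is injected here so the check is testable without a live server, and the function name is illustrative.

```typescript
// Minimal view of the fetch interface this check needs.
type Fetcher = (url: string) => Promise<{ ok: boolean }>;

// Returns true only if something is listening at the Ollama base URL
// and answers the version endpoint with a successful status.
async function ollamaReachable(
  fetcher: Fetcher,
  baseUrl = "http://127.0.0.1:11434",
): Promise<boolean> {
  try {
    const res = await fetcher(`${baseUrl}/api/version`);
    return res.ok;
  } catch {
    // Connection refused or timed out: no trusted local service available.
    return false;
  }
}
```

Note that a probe only confirms that *some* Ollama-compatible service is listening; it cannot confirm which software answered, so installing and configuring Ollama yourself remains the real safeguard.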
