LLM Supervisor
v0.2.0 · Automatically switches between cloud and local LLMs on rate limits, with user confirmation required for local code generation.
MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Verdict: Benign (high confidence)

Purpose & Capability
The skill's name and description match the implementation: hooks detect LLM errors, switch a global mode state, set agent LLM profiles (cloud vs local), and expose /llm commands. The declared permissions (llm, agents, notifications, state) align with these actions.
Instruction Scope
Runtime behavior is narrowly scoped: it only inspects LLM error messages, agent start events, and task intents; it blocks code-related tasks in local mode until the user supplies the explicit confirmation phrase. It does not read arbitrary files, environment variables, or other agents' credentials.
Install Mechanism
No install spec or external downloads are present; the skill is packaged with its code (compiled dist files). There are no external URL downloads or extract steps. This is low-risk from an installer perspective.
Credentials
The skill requests no environment variables or external credentials. It does configure a local Ollama baseUrl (http://127.0.0.1:11434) when switching to local mode, which is appropriate for its purpose.
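When switching to local mode, the LLM profile it writes presumably resembles the following sketch (field and model names here are assumptions for illustration; only the baseUrl is confirmed by the scan):

```json
{
  "llmProfile": {
    "provider": "ollama",
    "baseUrl": "http://127.0.0.1:11434",
    "model": "llama3"
  }
}
```

Because the baseUrl is loopback-only, the skill cannot reach an external inference server unless you edit this value yourself.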
Persistence & Privilege
skill.json sets enabledByDefault: true, so the skill is active as soon as it is installed. However, 'always' is false, and it does not request system-wide privileges beyond managing agents, LLM profiles, and state. Because it can change agent LLM profiles and notify all users, enabling by default means it will affect agent behavior immediately unless you disable it.
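Based on the fields the report cites, the relevant portion of skill.json likely looks something like the following sketch (the exact layout and any fields beyond those named above are assumptions):

```json
{
  "name": "llm-supervisor",
  "version": "0.2.0",
  "enabledByDefault": true,
  "always": false,
  "permissions": ["llm", "agents", "notifications", "state"]
}
```

Setting enabledByDefault to false before installation is the simplest way to test the skill without it immediately changing agent behavior.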
Assessment
This skill is internally consistent with its stated purpose. Before installing:
1) It sets the agent LLM profile to your cloud provider ('anthropic:default') or to a local Ollama endpoint at http://127.0.0.1:11434; ensure you run Ollama at that address or change the config.
2) It is enabled by default (skill.json enabledByDefault: true), so it may start affecting agents immediately; if you prefer control, disable it until tested.
3) It can notify all users and block code tasks in local mode until someone types the configured confirmation phrase; review or change confirmationPhrase, localModel, or cooldownMinutes if needed.
4) It does not request credentials or call external servers beyond the local Ollama URL; still, review the source if you require additional assurance.
If you want extra caution, test in a non-production workspace first.
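The three settings named above could be overridden with a fragment like this (the values shown are placeholders, not the skill's defaults, and the surrounding config shape is an assumption):

```json
{
  "config": {
    "confirmationPhrase": "yes, use local codegen",
    "localModel": "qwen2.5-coder",
    "cooldownMinutes": 15
  }
}
```

A longer cooldownMinutes keeps the skill in local mode longer after a rate limit; the confirmation phrase is what users must type before code tasks run against the local model.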
latest: vk972rq3stx5fjn23w6b5kr2k5x80j9cq
