LLM Supervisor

v0.3.0

Graceful rate limit handling with Ollama fallback. Notifies on rate limits, offers local model switch with confirmation for code tasks.

3 · 2.7k · 10 current · 13 all-time
MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal
Benign
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The skill's name and description align with the implementation: it listens for LLM errors, switches the agent to a local Ollama model (baseUrl http://127.0.0.1:11434), and blocks code tasks until confirmation. It does not request external credentials or unusual binaries. Note: onAgentStart forces 'anthropic:default' when in cloud mode, which may unexpectedly override user-selected cloud providers.
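The switch behavior can be pictured with a minimal sketch. The hook and context shapes below (onLLMError, ctx.setLLMProfile, ctx.notify.all, ctx.config.localModel, the 'llama3' default) are assumptions for illustration, not the skill's actual API:

```typescript
// Sketch: on a rate-limit error, immediately repoint the agent at a local
// Ollama endpoint and broadcast a notification. All names are illustrative.
const OLLAMA_BASE_URL = "http://127.0.0.1:11434";

interface LLMError {
  status?: number;
  message: string;
}

interface Ctx {
  config: { localModel?: string };
  setLLMProfile(profile: { provider: string; model: string; baseUrl: string }): void;
  notify: { all(message: string): void };
}

function onLLMError(ctx: Ctx, err: LLMError): void {
  const isRateLimit = err.status === 429 || /rate.?limit/i.test(err.message);
  if (!isRateLimit) return;
  // The switch happens immediately; only code tasks are gated on confirmation.
  ctx.setLLMProfile({
    provider: "ollama",
    model: ctx.config.localModel ?? "llama3",
    baseUrl: OLLAMA_BASE_URL,
  });
  // Broadcast goes to all users, as the review notes ctx.notify.all does.
  ctx.notify.all("Rate limit hit; switched to local Ollama model.");
}
```

Under these assumptions there is no ask-before-switch step at all for non-code work; the profile change and notification happen in the same handler.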
Instruction Scope
SKILL.md and README describe offering a fallback and asking the user; the implementation actually auto-switches to local when a rate-limit error is detected and then notifies users. Code tasks are blocked until the user confirms, but the auto-switch happens immediately for non-code work. The confirmation check looks for ctx.config.confirmationPhrase inside the last user message (case-insensitive substring match), which may be brittle (false positives/negatives) and can be bypassed or accidentally satisfied. If confirmationPhrase is not configured, the current code will always treat it as not confirmed, effectively blocking local code tasks indefinitely.
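The brittleness is easy to see in a sketch of the check as the review describes it (the function name and parameters are illustrative; only the case-insensitive-substring behavior and the ctx.config.confirmationPhrase field come from the review):

```typescript
// Sketch of the described check: a case-insensitive substring match over the
// last user message against ctx.config.confirmationPhrase.
function isConfirmed(lastUserMessage: string, confirmationPhrase?: string): boolean {
  // With no configured phrase this is always false, so local code tasks
  // stay blocked indefinitely.
  if (!confirmationPhrase) return false;
  return lastUserMessage.toLowerCase().includes(confirmationPhrase.toLowerCase());
}
```

A message like "please do NOT switch to local" still contains the phrase "switch to local", so a substring match counts it as confirmation, which is the false-positive case the review warns about.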
Install Mechanism
Instruction-only + bundled source; no install spec, no downloads, and no external package installation. Risk from install mechanism is low.
Credentials
The skill requires no environment variables, no credentials, and no special config paths. The requested capabilities (changing agent LLM profile, storing a small state) are proportionate to its stated purpose.
Persistence & Privilege
The skill registers hooks that alter new agents' LLM profiles and can block task execution — this is expected for a supervisor-type skill. always:false (not force-included) is appropriate. Be aware that it broadcasts notifications via ctx.notify.all when switching modes (may notify all users).
What to consider before installing
This skill is not obviously malicious, but review and test it before enabling it in production:

- Behavior mismatch: SKILL.md/README give the impression of 'ask before switching', but the code immediately switches to local on rate-limit events and only blocks code tasks. If you need an explicit ask-before-switch flow for non-code tasks, modify the onLLMError handler.
- Configuration inconsistencies: the code checks ctx.config.confirmationPhrase, but skill.json lists requireConfirmationForCode, and there is no guaranteed default confirmationPhrase. Without one configured, local code tasks will be blocked until you set it. Verify and set these config keys (e.g., localModel, confirmationPhrase, requireConfirmationForCode) as desired.
- Confirmation detection is a simple case-insensitive substring match on the last user message. That can produce false positives or be triggered accidentally; consider a stricter confirmation mechanism if needed.
- The skill forces 'anthropic:default' in cloud mode, which may override your intended cloud provider/profile. If you use different cloud profiles, adjust onAgentStart.
- The skill points local Ollama at http://127.0.0.1:11434; ensure the local Ollama server is trusted and secured.

Recommended actions before install: inspect and adjust config defaults, test in a non-production workspace, and consider adding explicit user prompts for switching non-code workloads if you want user consent prior to any auto-switch.
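If a stricter confirmation mechanism is wanted, one possible shape (illustrative only, not the skill's code) is an exact-match check that accepts a message only when it consists solely of the configured phrase:

```typescript
// Illustrative stricter alternative: require the last user message to be
// exactly the confirmation phrase (trimmed, case-insensitive), rather than
// merely containing it somewhere.
function isStrictlyConfirmed(lastUserMessage: string, confirmationPhrase?: string): boolean {
  if (!confirmationPhrase) return false;
  return lastUserMessage.trim().toLowerCase() === confirmationPhrase.toLowerCase();
}
```

With this check, "please switch to local when convenient" no longer counts as confirmation, while a bare "switch to local" does.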

Like a lobster shell, security has layers — review code before you run it.

latest · vk97ef2fd33wagemgrfh9ftd9e580mr6h

