llmrouter

v0.1.1

Intelligent LLM proxy that routes requests to appropriate models based on complexity. Save money by using cheaper models for simple tasks. Tested with Anthropic, OpenAI, Gemini, Kimi/Moonshot, and Ollama.

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan

VirusTotal: Benign
OpenClaw: Benign (medium confidence)
Purpose & Capability
The skill is an LLM routing proxy and the declared requirements (python3, pip) and the primary credential (ANTHROPIC_API_KEY) are consistent with that purpose. The SKILL.md also documents support for multiple providers (OpenAI, Google, Kimi, Ollama) and expects corresponding provider keys in config.yaml. Registry metadata lists no required env vars but does include primaryEnv=ANTHROPIC_API_KEY — a minor inconsistency but explainable (the router supports multiple provider keys in config rather than fixed env vars).
Instruction Scope
The runtime instructions are limited to cloning the repo, creating a venv, installing requirements, optionally pulling local models with Ollama, editing config.yaml/ROUTES.md, and running server.py (or creating an optional macOS LaunchAgent). The instructions reference provider API keys and local files used by the router (config.yaml, ROUTES.md), but do not instruct reading unrelated system files or exfiltrating data.
Install Mechanism
This is an instruction-only skill (no install spec). The SKILL.md instructs cloning the public GitHub repo and running pip install -r requirements.txt — a conventional install path. No high-risk downloads or obscure URLs are used in the provided instructions.
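The install path described above can be sketched as follows. The repo URL is taken from this report's assessment section and the server filename from the skill's own instructions; review the cloned code before running any of it.

```shell
# Sketch of the documented install path; inspect server.py and
# config.yaml in the cloned repo before executing anything.
git clone https://github.com/alexrudloff/llmrouter
cd llmrouter

# Isolate dependencies in a virtual environment.
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt

# After adding provider keys to config.yaml, start the router with:
#   python server.py
# (the server binds 127.0.0.1 by default)
```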
Credentials
The skill declares a primary credential (ANTHROPIC_API_KEY) which is reasonable for using Anthropic as a provider. SKILL.md also expects other provider keys to be added to config.yaml when using those providers; the registry metadata's 'Required env vars: none' is slightly inconsistent with examples in the docs that use ANTHROPIC_API_KEY in an Authorization header. Overall the amount of credential access requested is proportional to a multi-provider router, but users should expect to supply multiple provider keys in configuration.
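To illustrate the "keys live in config rather than fixed env vars" point, a config.yaml for a multi-provider router might look like the hypothetical sketch below. The field names here are assumptions, not the upstream repo's actual schema; check its README before copying anything.

```yaml
# Hypothetical layout; the real schema is defined by the upstream repo.
providers:
  anthropic:
    api_key: ${ANTHROPIC_API_KEY}   # prefer env references over literal keys
  openai:
    api_key: ${OPENAI_API_KEY}
  ollama:
    base_url: http://127.0.0.1:11434  # local models need no key
```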
Persistence & Privilege
The skill does not request always:true and is user-invocable. The only persistence step in the docs is an optional macOS LaunchAgent recipe the user can install to run the server at boot; this is explicitly optional (and the server defaults to binding 127.0.0.1). No instructions attempt to modify other skills or system-wide agent configuration.
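For reference, a macOS LaunchAgent of the kind the docs describe typically looks like the sketch below. The label and paths are placeholders, not taken from SKILL.md; prefer the recipe in the skill's own docs, and note that RunAtLoad/KeepAlive is what makes the server auto-start and restart.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <!-- Placeholder label and paths; substitute your own. -->
  <key>Label</key><string>local.llmrouter</string>
  <key>ProgramArguments</key>
  <array>
    <string>/Users/you/llmrouter/.venv/bin/python</string>
    <string>/Users/you/llmrouter/server.py</string>
  </array>
  <key>RunAtLoad</key><true/>
  <key>KeepAlive</key><true/>
</dict>
</plist>
```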
Assessment
This skill is an instruction-only wrapper around an open-source LLM router. Before installing:
1. Review the upstream repository (https://github.com/alexrudloff/llmrouter) and inspect server.py and config.yaml to understand how API keys are used and stored.
2. Expect to provide API keys for any providers you want to use (Anthropic is shown as primary; add OpenAI/Google/Kimi keys to config.yaml as needed).
3. Run it in an isolated environment (virtualenv, container, or VM) and bind to localhost (default 127.0.0.1) unless you explicitly intend to expose it.
4. If you install the optional LaunchAgent/service, be aware it will auto-start the router at boot; verify authentication and logs before enabling it.
5. Because the skill package itself contains only documentation (no code), runtime behavior depends entirely on the external repo code you clone. Verify that code before running pip install or python server.py.

Like a lobster shell, security has layers — review code before you run it.

Latest version: vk97e2mmppb27d848tfbmc8k2a180dqtd


Runtime requirements

Platform: Clawdis
OS: macOS · Linux
Binaries: python3
Any of: pip, pip3
Primary env: ANTHROPIC_API_KEY
