semantic-model-router

v1.0.3

Smart LLM Router — routes every query to the cheapest capable model. Supports 17 models across Anthropic, OpenAI, Google, DeepSeek & xAI (Grok). Uses a pre-t...

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The name and description describe a router that selects the cheapest capable model. The included code and weights implement a local classifier/selector (ModelRouter); they do not themselves call provider APIs or require provider credentials. That is coherent, but the description could mislead users into thinking the skill will automatically invoke external LLM provider endpoints. In fact it only selects and labels a model id and reports estimated costs.
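The distinction between selecting a model and invoking one can be pictured with a minimal sketch. The names, heuristic, and prices below are illustrative, not the skill's actual API or price catalog:

```python
# Hypothetical sketch of selection-only routing: the function returns a
# model label and a cost estimate, and never contacts a provider endpoint.
PRICE_PER_1K_TOKENS = {          # illustrative price catalog
    "small-model": 0.0002,
    "large-model": 0.0030,
}

def route(query: str) -> dict:
    # Toy capability heuristic: longer queries go to the larger model.
    model = "large-model" if len(query.split()) > 50 else "small-model"
    est_tokens = max(1, len(query) // 4)   # rough token estimate
    return {
        "model": model,                                   # a label only
        "estimated_cost": PRICE_PER_1K_TOKENS[model] * est_tokens / 1000,
    }
```

A caller that wants the chosen model to actually run still needs the provider's SDK and credentials; nothing in a selector like this performs the call.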
Instruction Scope
SKILL.md states "Zero external calls" and "no API keys needed." However, the code instantiates SentenceTransformer('all-MiniLM-L6-v2') when available, which will normally download model files from Hugging Face (external network activity) unless they are already cached. The router also writes a local history file (query_history.json). No explicit network endpoints or credential access appears in the visible code, but the potential for implicit external downloads via dependencies contradicts the SKILL.md claim.
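If strictly offline behavior is required, one mitigation (using the standard Hugging Face offline environment flags, which the skill itself does not document) is to set those flags before the encoder is imported, so a missing cached model fails fast rather than triggering a download:

```python
import os

# Set Hugging Face offline flags *before* importing sentence-transformers;
# with these set, loading an uncached model raises instead of downloading.
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"

def load_encoder():
    """Return a cached encoder, or None if unavailable offline."""
    try:
        from sentence_transformers import SentenceTransformer
        return SentenceTransformer("all-MiniLM-L6-v2")  # cached copy only
    except Exception:
        return None  # router can fall back to non-embedding heuristics

encoder = load_encoder()
```

Whether the router degrades gracefully when the encoder is unavailable is something to verify against the actual model_router.py source.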
Install Mechanism
No explicit install spec (instruction-only), but requirements.txt lists sentence-transformers and numpy. Installing these pulls in heavy Python packages and may trigger additional downloads (tokenizers, Hugging Face components). No remote archive downloads or unknown URLs were observed in the provided files.
Credentials
The skill declares no required environment variables, no credentials, and no config paths. The code likewise does not reference environment secrets or external API keys. This is proportionate to a local classifier/selector.
Persistence & Privilege
The skill is not always-enabled and does not request elevated system privileges. It writes a local query_history.json by default (user-writable), and loads local model files if present. It does not modify other skill configs or global agent settings.
What to consider before installing
This skill appears to be a local classifier that selects which provider/model would be most cost-efficient; it does not itself call provider APIs or require provider API keys. However:

- SKILL.md's "zero external calls" claim is misleading: if sentence-transformers is installed and used, the encoder (all-MiniLM-L6-v2) will normally be downloaded from Hugging Face unless already cached. If you require strictly offline behavior, do not install or instantiate the encoder, or ensure the model is pre-cached in a controlled environment.
- The skill writes a local file named query_history.json by default; review or relocate it if you want to avoid storing queries on disk.
- If you expected the skill to automatically route and invoke remote LLMs for you, note that it only selects and labels models and reports estimated costs; you still need the provider integrations and credentials to actually call the chosen models.

Recommendations before installing:

1. Review the full model_router.py and model_weights.py contents locally (they are auditable).
2. If you want zero network activity, avoid installing or using sentence-transformers, or pre-cache the embedding model in a private environment.
3. Run the skill in a sandbox or isolated environment first and observe whether it attempts to download models.
4. Understand that the claimed cost savings are estimates based on the embedded price catalog; no provider billing or credential handling is included.
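To see what the history file actually stores before deciding whether to keep or delete it, a small audit helper can be used. This is a hypothetical sketch that assumes the file is a JSON list, as its name suggests; confirm the real format against model_router.py:

```python
import json
import pathlib

def audit_history(path="query_history.json"):
    """Return the number of logged entries, or 0 if no history file exists."""
    p = pathlib.Path(path)
    if not p.exists():
        return 0
    return len(json.loads(p.read_text()))

# Example against a synthetic history file (illustrative structure):
demo = pathlib.Path("demo_history.json")
demo.write_text(json.dumps([{"query": "example", "model": "small-model"}]))
print(audit_history(demo))   # one logged entry
demo.unlink()                # remove the file after inspection
```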

Like a lobster shell, security has layers: review code before you run it.

latest · vk972fp99nsanbttzk6hs0snnq981xgp0
