Model-Selector

v1.0.0

A powerful model routing skill that analyzes query intent and cost-efficiency to select the optimal LLM (Elite/Balanced/Basic) before execution.

Security Scan

  • VirusTotal: Benign
  • OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The skill's name and description (model routing between Elite/Balanced/Basic) align with the included Python code, which contains a ModelRouter and a training helper. However, the registry/README claims ClawHub-optimized tiers and 'Multi-Provider Support', while the skill itself only returns model identifiers; it performs no provider authentication or API calls. The requirements.txt also includes 'litellm' even though the code never imports or uses it, an unnecessary dependency that is disproportionate to the stated functionality.
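To make the claimed behavior concrete, here is a minimal sketch of the kind of keyword-based tier routing the listing describes. The function name, keyword sets, and return shape are illustrative assumptions; the actual skill code may differ.

```python
# Hypothetical sketch of keyword-based tier routing.
# Tier-to-model mapping follows the "Model Tiers" section of SKILL.md;
# the keyword sets below are invented for illustration.

TIER_MODELS = {
    "ELITE": "anthropic/claude-3-5-sonnet-latest",
    "BALANCED": "openai/gpt-4o-mini",
    "BASIC": "deepseek/deepseek-chat",
}

ELITE_KEYWORDS = {"architecture", "design", "prove", "optimize", "security"}
BASIC_KEYWORDS = {"summarize", "translate", "list", "define"}

def route(query: str) -> dict:
    """Pick a tier from keyword hits; default to BALANCED."""
    words = set(query.lower().split())
    if words & ELITE_KEYWORDS:
        tier = "ELITE"
    elif words & BASIC_KEYWORDS:
        tier = "BASIC"
    else:
        tier = "BALANCED"
    return {"tier": tier, "suggested_model": TIER_MODELS[tier]}
```

Note that a router like this only *suggests* a model identifier; actually calling the named providers would require separate credentials and client code, which is exactly the gap the review points out.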
Instruction Scope
SKILL.md instructs agents to call a tool named get_optimal_model and shows an example using router.analyze_and_route(), but the provided code exposes only ModelRouter.route(); neither analyze_and_route nor get_optimal_model exists, so the documentation and code are inconsistent. The code also logs every routed query to a local file (query_history.json) and exposes a 'refine_keywords' / train_router.py flow that reads that history. Storing user queries on disk is a privacy risk and could later be used to aggregate sensitive inputs. The refine flow mentions using an external LLM to suggest new keywords, which could lead to logged queries being sent to third-party models if implemented later; the current code does not itself send data externally, but the design enables it.
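One way to resolve the naming mismatch is a thin compatibility layer that exposes the documented names as aliases for the real method. The ModelRouter below is a stand-in for the skill's actual class; only the alias pattern is the point.

```python
# Hypothetical adapter reconciling SKILL.md's documented interface
# (get_optimal_model / analyze_and_route) with the code's ModelRouter.route().

class ModelRouter:
    """Stand-in for the skill's actual router class."""
    def route(self, query: str) -> dict:
        # Placeholder routing logic; the real code decides the tier here.
        return {"tier": "BALANCED", "suggested_model": "openai/gpt-4o-mini"}

class CompatRouter(ModelRouter):
    """Expose the name SKILL.md documents as an alias for route()."""
    def analyze_and_route(self, query: str) -> dict:
        return self.route(query)

def get_optimal_model(query: str) -> str:
    # Tool-shaped wrapper matching the name SKILL.md tells agents to call.
    return CompatRouter().analyze_and_route(query)["suggested_model"]
```

Whether the fix belongs in the code (add the alias) or in SKILL.md (document route()) is the publisher's call; either way the two must agree before an agent can call the tool reliably.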
Install Mechanism
There is no install spec (instruction-only install), so nothing is automatically downloaded or executed by an installer, which lowers supply-chain risk. However, the bundle includes a requirements.txt declaring heavy ML libraries (torch, sentence-transformers) that are large and may be installed by a user; they are plausible for the code but could be unexpected. No network download URLs or extraction steps are present in the skill metadata.
Credentials
The skill requests no environment variables or credentials (none declared), which is proportionate for a router that only suggests model identifiers. Note: to actually execute calls against the named providers the agent/host will need provider-specific API keys separate from this skill; those are not requested by the skill itself.
Persistence & Privilege
The skill writes and reads a local file, query_history.json, for rolling adjustment, keeping up to 1000 entries. It does not request always:true and does not modify other skills. Persisting user queries to disk is a modest persistence and privacy concern: sensitive prompts could be retained.
What to consider before installing
This skill appears to do what it claims (choose a model tier) but has a few red flags you should consider before installing:

  • Code/instruction mismatch: SKILL.md references get_optimal_model and analyze_and_route, but the code provides ModelRouter.route(). Confirm the actual tool interface the agent will call and update the README or code so they match.
  • Local logging: It writes query_history.json containing users' queries. If prompts may contain secrets or sensitive data, disable or audit this logging, restrict file permissions, or store history encrypted or ephemerally.
  • Unused/large dependencies: requirements.txt includes litellm and heavy ML libraries (torch, sentence-transformers). If you install dependencies, do so in an isolated environment (venv/container) and consider trimming unused packages.
  • Minor bugs: The code uses torch.* inside ModelRouter.route but does not import torch at the top level, so running with the semantic encoder enabled may raise a NameError. Test in a safe environment before trusting it in production.

If you want to proceed: run the skill in an isolated sandbox, inspect and clean up query_history.json, reconcile SKILL.md and the code's function names, and verify where and how any later 'refine' step would send data (avoid sending raw history to third-party LLMs unless you have consent and appropriate safeguards). Alternatively, ask the publisher for a fixed version that disables logging by default, aligns interface names, and documents exactly how refinement would be performed.
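The missing-import bug has a straightforward guarded fix. This is a sketch, not the skill's actual code: torch is imported defensively, and routing falls back to the keyword path if the ML stack is unavailable.

```python
# Sketch of a guarded torch import fixing the NameError the review flags:
# the semantic-encoder path runs only when torch is actually importable.

try:
    import torch  # needed only for the semantic-encoder path
except ImportError:
    torch = None

def semantic_available() -> bool:
    """True when the optional ML dependency is installed."""
    return torch is not None

def route_query(query: str) -> str:
    if semantic_available():
        # The real skill would embed the query with sentence-transformers
        # here and compare against tier prototypes (requires torch).
        pass
    # Keyword fallback works without any ML dependencies installed.
    return "BALANCED"
```

With this guard, the skill degrades gracefully in environments where the heavy dependencies were deliberately left uninstalled.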

Like a lobster shell, security has layers — review code before you run it.

latest · vk97ew2n6m9sggp0w341e5sd1td81xpt4

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

SKILL.md

Semantic Model Orchestrator

This skill provides an intelligent middle layer that lets AI agents decide which model tier should handle a specific task. Using semantic analysis, it categorizes queries into Elite, Balanced, or Basic levels.

Features

  • Semantic Intent Recognition: Uses vector embeddings to detect query complexity.
  • Cost-Efficiency Orchestration: Routes queries to Elite, Balanced, or Basic models.
  • ClawHub Optimized: Default tiers for Claude 3.5 Sonnet, GPT-4o-mini, and DeepSeek.
  • Rolling Adjustment: Built-in logic to refine intent keywords from user history.
  • Multi-Provider Support: Supports OpenAI, Anthropic, Gemini, and DeepSeek.

Model Tiers

  • Elite: anthropic/claude-3-5-sonnet-latest
  • Balanced: openai/gpt-4o-mini
  • Basic: deepseek/deepseek-chat

Usage

Add this skill to your agent's capability list. The agent will call the get_optimal_model tool before making main LLM calls to optimize performance and budget.

Example Tool Call

result = router.analyze_and_route("Design a high-scalable microservices architecture for a fintech app.")
# Returns: {"tier": "ELITE", "suggested_model": "anthropic/claude-3-5-sonnet-latest"}

Files

5 total
