Dynamic Model Selector
v1.0.0
Dynamically select the best AI model for a task based on complexity, cost, and availability in GitHub Copilot. Use when deciding between free/paid models, or when you want automatic model routing based on query analysis.
Security Scan
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The skill claims to pick the best model 'available in GitHub Copilot', but its examples list model names spanning multiple vendors (gpt-4o, claude-3.5-sonnet, grok-code-fast-1). The list may be accurate only if the skill is genuinely multi-provider; since the description and metadata emphasize GitHub Copilot specifically, the mapping between the claimed purpose and the actual model inventory is unclear.
Instruction Scope
SKILL.md instructs the agent to 'run the classification script' but gives no runtime instructions (no declared Python requirement, no CLI invocation, no input/output contract). The instructions otherwise stay on-topic and do not request unrelated files or credentials.
Install Mechanism
There is no install spec (lowest install risk) but a bundled script (scripts/classify_task.py) exists. The skill does not declare required binaries, yet the script implies a Python runtime — this mismatch should be resolved so users know how to run it.
Credentials
The skill requests no environment variables, no credentials, and no config paths, which is proportional for a local classification helper. There are no declared secrets or broad credential requirements.
Persistence & Privilege
No elevated privileges are requested: always is not set, model invocation settings are default, and the skill does not ask for permanent presence or special platform hooks.
What to consider before installing
This skill is plausible but has small inconsistencies you should resolve before trusting it. Steps to take before installing or running:
- Open and read scripts/classify_task.py to confirm what it does: check for network calls, subprocess execution, or attempts to read files/credentials. If you are not comfortable inspecting it yourself, run it in an isolated sandbox.
- Ensure you have the required runtime (likely Python). The SKILL.md should state how to run the script (Python version, CLI args). Ask the author to add explicit run instructions.
- Verify the models referenced in references/models.md actually map to models available in your environment (GitHub Copilot) — the examples include names from multiple providers, which may be inaccurate for Copilot-only routing.
- Confirm there are no hidden data exfiltration behaviors (outbound network, telemetry) in the script before giving it access to real queries.
- Because the source and homepage are unknown, treat this as untrusted code until you inspect it or obtain provenance from the publisher.
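The inspection step in the checklist above can be partially automated. The sketch below is a hypothetical helper, not part of the skill: the function name and pattern list are illustrative, and a clean result does not prove the script is safe — it only surfaces the obvious red flags to read first.

```python
import re

# Illustrative (not exhaustive) patterns worth reviewing before running
# an untrusted script such as scripts/classify_task.py.
RISKY_PATTERNS = {
    "network": r"\b(socket|urllib|requests|http\.client)\b",
    "subprocess": r"\b(subprocess|os\.system|os\.popen)\b",
    "file access": r"\bopen\s*\(",
    "dynamic exec": r"\b(eval|exec)\s*\(",
}

def flag_risks(source: str) -> list[str]:
    """Return the categories of risky calls found in the source text."""
    return [name for name, pattern in RISKY_PATTERNS.items()
            if re.search(pattern, source)]
```

Run it over the script's source (`flag_risks(open("scripts/classify_task.py").read())`) and manually review every flagged region before feeding the skill real queries.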
Dynamic Model Selector
Overview
This skill analyzes user queries to recommend the optimal AI model from available GitHub Copilot options, balancing performance, cost, and task requirements.
How to Use
- Provide the user query or task description.
- Run the classification script (scripts/classify_task.py) to analyze complexity.
- Choose the suggested model or adjust based on preferences.
Classification Criteria
- Simple tasks (short responses, basic chat): Use faster, free models like grok-code-fast-1.
- Complex reasoning (analysis, multi-step): Use advanced models like gpt-4o or claude-3.5-sonnet.
- Code generation: Prefer code-optimized models.
- Cost sensitivity: Favor free models when possible.
Example Usage
For a query like "Explain quantum computing": Classify as medium complexity -> Recommend gpt-4o.
For "Write a Python function to sort a list": Classify as code task -> Recommend grok-code-fast-1.
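The criteria and examples above can be sketched as a simple keyword-based router. This is an assumption about how classify_task.py might work — the script's actual logic, keyword lists, and input/output contract are undocumented, so every name below is illustrative.

```python
# Hypothetical routing sketch; keyword lists and thresholds are assumptions,
# not taken from classify_task.py.
CODE_KEYWORDS = ("function", "code", "implement", "refactor", "debug")
REASONING_KEYWORDS = ("explain", "analyze", "compare", "design")

def recommend_model(query: str) -> str:
    q = query.lower()
    if any(k in q for k in CODE_KEYWORDS):
        return "grok-code-fast-1"   # code task: code-optimized, free
    if any(k in q for k in REASONING_KEYWORDS) or len(q.split()) > 20:
        return "gpt-4o"             # medium/complex reasoning: advanced model
    return "grok-code-fast-1"       # simple chat: fast, free
```

Under these assumptions, "Explain quantum computing" routes to gpt-4o and "Write a Python function to sort a list" routes to grok-code-fast-1, matching the examples above.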
Resources
scripts/
classify_task.py: Analyzes the query and outputs a model recommendation.
references/
models.md: Detailed list of available models, pros/cons, costs.