Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Model Router

A comprehensive AI model routing system that automatically selects the optimal model for any task. Set up multiple AI providers (Anthropic, OpenAI, Gemini, Moonshot, Z.ai, GLM) with secure API key storage, then route tasks to the best model based on task type, complexity, and cost optimization. Includes interactive setup wizard, task classification, and cost-effective delegation patterns. Use when you need "use X model for this", "switch model", "optimal model", "which model should I use", or to balance quality vs cost across multiple AI providers.

MIT-0 · Free to use, modify, and redistribute. No attribution required.
5 · 3.3k · 37 current installs · 40 all-time installs
Security Scan
VirusTotal: Suspicious
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The name/description (model routing across multiple providers) matches the provided code and docs: a local classifier and an interactive setup wizard that stores provider API keys and writes routing config. However, SKILL.md claims "Encrypted at rest" as a storage feature, while the code only writes a plaintext file (~/.model-router/.api-keys) with file permissions 600 and performs no encryption itself. That claim is therefore an overstatement: it depends on the user's OS-level disk encryption, not on anything the skill implements.
Instruction Scope
Runtime instructions are confined to local actions: run the setup wizard, run the classifier, and use an external sessions_spawn tool to spawn model sessions. The setup wizard collects API keys (hidden input) and writes them to ~/.model-router/.api-keys; the classifier performs only local keyword matching. The SKILL.md tells users to never commit keys and to use env vars in production. No instructions attempt to read unrelated system files or network-exfiltrate keys, but the skill does create a plaintext keys file (expected for purpose but notable). The docs also reference an external CLI (sessions_spawn) not provided by the skill.
Install Mechanism
No install spec is present; this is an instruction-and-script-only skill. Nothing is downloaded or executed automatically beyond the included Python scripts. This lowers risk compared to remote installers.
Credentials
The skill requests no declared environment variables and no primary credential. It does, however, store provider API keys in ~/.model-router/.api-keys under names like PROVIDER_API_KEY and optionally PROVIDER_BASE_URL. That is proportionate to the stated purpose (it must hold API keys to call providers), but storing keys in a plaintext file (even with 600 perms) may be weaker than users expect given the SKILL.md's claim of "Encrypted at rest."
Persistence & Privilege
The skill does not request always:true or elevated platform privileges. It only writes to its own directory (~/.model-router) and its own config files. It does not modify other skills' configs or claim broad system access.
What to consider before installing
  • The included scripts implement a local classifier and an interactive setup wizard that saves your provider API keys to ~/.model-router/.api-keys (format: KEY=VALUE). The wizard hides input as you type a key and sets file permissions to 600, which is reasonable but not the same as encrypting keys.
  • SKILL.md states "Encrypted at rest (via OS filesystem encryption)", but the skill does NOT encrypt keys itself. That statement holds only if your system already uses disk encryption. If you need stronger protection, use a platform secret store (OS keyring, HashiCorp Vault, a cloud secret manager), or modify the scripts to encrypt keys before writing.
  • The skill declares no environment variables or remote installers, and the code contains no network/exfiltration logic. Still, verify that the sessions_spawn tool and any other external CLIs it expects are genuine and available on your system before running sample commands.
  • Do not commit ~/.model-router/.api-keys or config.json to version control. Rotate keys after setup, as the documentation advises.
  • If you want higher assurance, inspect the scripts locally (they are small and readable) and consider replacing plaintext key storage with an encrypted keystore, or using environment variables or a secrets manager for production deployments.

Summary recommendation: the skill appears to be what it claims, but it contains a misleading storage claim and stores keys in plaintext files. Treat it cautiously (review or harden key storage) before trusting it with production API keys.

Like a lobster shell, security has layers — review code before you run it.

Current version: v1.1.0
Download zip
Tags: latest · multi-provider · setup-wizard · v1.1 (all pointing to version vk979r5c8345mh2phyx515skecx7zmfsr)

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

SKILL.md

Model Router

Intelligent AI model routing across multiple providers for optimal cost-performance balance.

Automatically select the best model for any task based on complexity, type, and your preferences. Supports six major AI providers with secure API key management and interactive configuration.

🎯 What It Does

  • Analyzes tasks and classifies them by type (coding, research, creative, simple, etc.)
  • Routes to optimal models from your configured providers
  • Optimizes costs by using cheaper models for simple tasks
  • Secures API keys with file permissions (600) and isolated storage
  • Provides recommendations with confidence scoring and reasoning
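The classification step is a purely local keyword match. A minimal sketch of how such a classifier could work; the keyword lists and confidence formula below are hypothetical, not the skill's actual logic (which lives in scripts/classify_task.py):

```python
# Sketch of keyword-based task classification. Keyword sets and the
# confidence formula are illustrative, not the skill's real values.
TASK_KEYWORDS = {
    "coding": {"build", "debug", "refactor", "implement", "system"},
    "research": {"analyze", "compare", "investigate"},
    "creative": {"write", "story", "poem"},
}

def classify(task: str) -> tuple[str, float]:
    """Return (task_type, confidence) from naive keyword overlap."""
    words = set(task.lower().split())
    best_type, best_hits = "simple", 0  # default bucket when nothing matches
    for task_type, keywords in TASK_KEYWORDS.items():
        hits = len(words & keywords)
        if hits > best_hits:
            best_type, best_hits = task_type, hits
    # Confidence rises with each matched keyword, capped below certainty.
    confidence = min(0.95, 0.5 + 0.175 * best_hits)
    return best_type, confidence

print(classify("Build a React authentication system"))  # classified as coding
```

Anything without a keyword hit falls into the cheap "simple" bucket, which is what makes the cost optimization below possible.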

🚀 Quick Start

Step 1: Run the Setup Wizard

cd skills/model-router
python3 scripts/setup-wizard.py

The wizard will guide you through:

  1. Provider setup - Add your API keys (Anthropic, OpenAI, Gemini, etc.)
  2. Task mappings - Choose which model for each task type
  3. Preferences - Set cost optimization level

Step 2: Use the Classifier

# Get model recommendation for a task
python3 scripts/classify_task.py "Build a React authentication system"

# Output:
# Recommended Model: claude-sonnet
# Confidence: 85%
# Cost Level: medium
# Reasoning: Matched 2 keywords: build, system

Step 3: Route Tasks with Sessions

# Spawn with recommended model
sessions_spawn --task "Debug this memory leak" --model claude-sonnet

# Use aliases for quick access
sessions_spawn --task "What's the weather?" --model haiku

📊 Supported Providers

  • Anthropic: claude-opus-4-5, claude-sonnet-4-5, claude-haiku-4-5 (best for coding, reasoning, creative; key format sk-ant-...)
  • OpenAI: gpt-4o, gpt-4o-mini, o1-mini, o1-preview (best for tools, deep reasoning; key format sk-proj-...)
  • Gemini: gemini-2.0-flash, gemini-1.5-pro, gemini-1.5-flash (best for multimodal, huge 2M context; key format AIza...)
  • Moonshot: moonshot-v1-8k/32k/128k (best for Chinese language; key format sk-...)
  • Z.ai: glm-4.5-air, glm-4.7 (cheapest, fast; key formats vary)
  • GLM: glm-4-flash, glm-4-plus, glm-4-0520 (best for Chinese, coding; key format ID.secret)

🎛️ Task Type Mappings

Default routing (customizable via wizard):

  • simple → glm-4.5-air (fastest, cheapest for quick queries)
  • coding → claude-sonnet-4-5 (excellent code understanding)
  • research → claude-sonnet-4-5 (balanced depth and speed)
  • creative → claude-opus-4-5 (maximum creativity)
  • math → o1-mini (specialized reasoning)
  • vision → gemini-1.5-flash (fast multimodal)
  • chinese → glm-4.7 (optimized for Chinese)
  • long_context → gemini-1.5-pro (up to 2M tokens)

💰 Cost Optimization

Aggressive Mode

Always uses the cheapest capable model:

  • Simple → glm-4.5-air (~10% cost)
  • Coding → claude-haiku-4-5 (~25% cost)
  • Research → claude-sonnet-4-5 (~50% cost)

Savings: 50-90% compared to always using premium models

Balanced Mode (Default)

Considers cost vs quality:

  • Simple tasks → Cheap models
  • Critical tasks → Premium models
  • Automatic escalation if cheap model fails

Quality Mode

Always uses the best model, regardless of cost.

🔒 Security

API Key Storage

~/.model-router/
├── config.json       # Model mappings (chmod 600)
└── .api-keys         # API keys (chmod 600)

Features:

  • File permissions restricted to owner (600)
  • Isolated from version control
  • Encrypted at rest (via OS filesystem encryption)
  • Never logged or printed
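The owner-only key file can be approximated as follows. This is a sketch of the permission-setting approach, assuming keys are written as KEY=VALUE lines; note that restrictive permissions limit access but are not encryption:

```python
# Sketch: create a key file readable only by its owner, as the setup
# wizard does. This restricts access; it does not encrypt anything.
import os
import tempfile

def write_keys(path: str, keys: dict[str, str]) -> None:
    # Passing mode 0o600 to os.open sets owner-only permissions at
    # creation time, so the file is never momentarily world-readable.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        for name, value in keys.items():
            f.write(f"{name}={value}\n")

path = os.path.join(tempfile.mkdtemp(), ".api-keys")
write_keys(path, {"ANTHROPIC_API_KEY": "sk-ant-example"})
print(oct(os.stat(path).st_mode & 0o777))
```

Setting the mode in os.open (rather than chmod after writing) avoids a brief window where the file exists with default permissions.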

Best Practices

  1. Never commit .api-keys to version control
  2. Use environment variables for production deployments
  3. Rotate keys regularly via the wizard
  4. Audit access with ls -la ~/.model-router/

📖 Usage Examples

Example 1: Cost-Optimized Workflow

# Classify task first
python3 scripts/classify_task.py "Extract prices from this CSV"

# Result: simple task → use glm-4.5-air
sessions_spawn --task "Extract prices" --model glm-4.5-air

# Then analyze with better model if needed
sessions_spawn --task "Analyze price trends" --model claude-sonnet

Example 2: Progressive Escalation

# Try cheap model first (60s timeout)
sessions_spawn --task "Fix this bug" --model glm-4.5-air --runTimeoutSeconds 60

# If fails, escalate to premium
sessions_spawn --task "Fix complex architecture bug" --model claude-opus

Example 3: Parallel Processing

# Batch simple tasks in parallel with cheap model
sessions_spawn --task "Summarize doc A" --model glm-4.5-air &
sessions_spawn --task "Summarize doc B" --model glm-4.5-air &
sessions_spawn --task "Summarize doc C" --model glm-4.5-air &
wait

Example 4: Multimodal with Gemini

# Vision task with 2M token context
sessions_spawn --task "Analyze these 100 images" --model gemini-1.5-pro

🛠️ Configuration Files

~/.model-router/config.json

{
  "version": "1.1.0",
  "providers": {
    "anthropic": {
      "configured": true,
      "models": ["claude-opus-4-5", "claude-sonnet-4-5", "claude-haiku-4-5"]
    },
    "openai": {
      "configured": true,
      "models": ["gpt-4o", "gpt-4o-mini", "o1-mini", "o1-preview"]
    }
  },
  "task_mappings": {
    "simple": "glm-4.5-air",
    "coding": "claude-sonnet-4-5",
    "research": "claude-sonnet-4-5",
    "creative": "claude-opus-4-5"
  },
  "preferences": {
    "cost_optimization": "balanced",
    "default_provider": "anthropic"
  }
}
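Given the config.json layout above, a downstream script might resolve a task type to a model like this; the helper name and fallback choice are illustrative, not part of the skill:

```python
# Resolve a task type to a model from the documented config layout.
# model_for and its fallback default are illustrative only.
import json

CONFIG = json.loads("""
{
  "task_mappings": {"simple": "glm-4.5-air", "coding": "claude-sonnet-4-5"},
  "preferences": {"cost_optimization": "balanced", "default_provider": "anthropic"}
}
""")

def model_for(task_type: str, config: dict, fallback: str = "claude-sonnet-4-5") -> str:
    # Unmapped task types fall back to a safe general-purpose model.
    return config.get("task_mappings", {}).get(task_type, fallback)

print(model_for("coding", CONFIG))  # claude-sonnet-4-5
print(model_for("vision", CONFIG))  # claude-sonnet-4-5 (fallback)
```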

~/.model-router/.api-keys

# Generated by setup wizard - DO NOT edit manually
ANTHROPIC_API_KEY=sk-ant-...
OPENAI_API_KEY=sk-proj-...
GEMINI_API_KEY=AIza...

🔄 Version 1.1 Changes

New Features

  • Interactive setup wizard for guided configuration
  • Secure API key storage with file permissions
  • Task-to-model mapping customization
  • Multi-provider support (6 providers)
  • Cost optimization levels (aggressive/balanced/quality)

Improvements

  • ✅ Better task classification with confidence scores
  • ✅ Provider-specific model recommendations
  • ✅ Enhanced security with isolated storage
  • ✅ Comprehensive documentation

Migration from 1.0

Run the setup wizard to reconfigure:

python3 scripts/setup-wizard.py

📚 Command Reference

Setup Wizard

python3 scripts/setup-wizard.py

Interactive configuration of providers, mappings, and preferences.

Task Classifier

python3 scripts/classify_task.py "your task description"
python3 scripts/classify_task.py "your task" --format json

Get model recommendation with reasoning.

List Models

python3 scripts/setup-wizard.py --list

Show all available models and their status.

🤝 Integration with Other Skills

  • model-usage: track cost per provider to optimize routing
  • sessions_spawn: primary tool for model delegation
  • session_status: check current model and usage

⚡ Performance Tips

  1. Start simple - Try cheap models first
  2. Batch tasks - Combine multiple simple tasks
  3. Use cleanup - Delete sessions after one-off tasks
  4. Set timeouts - Prevent runaway sub-agents
  5. Monitor usage - Track costs per provider

🐛 Troubleshooting

"No suitable model found"

  • Run setup wizard to configure providers
  • Check API keys are valid
  • Verify permissions on .api-keys file

"Module not found"

pip3 install -r requirements.txt  # if needed

Wrong model selected

  1. Customize task mappings via wizard
  2. Use explicit model in sessions_spawn --model
  3. Adjust cost optimization preference

📖 Additional Resources

Files

5 total
