AI Model Switcher
v1.0.0 · A hybrid workflow that uses a local model for everyday tasks and a cloud model for complex ones. It automatically selects the best model for each task type, maximizing use of local models (zero token cost) and minimizing cloud-model token consumption.
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Benign · high confidence

Purpose & Capability
Name/description (model switching, cost optimization) match the included files and instructions: the skill reads a local JSON config, shows status, logs switches, and generates openclaw commands to set models. Required binary is only openclaw (plus a recommended ollama for local models), which is proportionate.
Instruction Scope
Runtime instructions and the PowerShell script operate on the skill's config, stats, and log files and call 'openclaw config get' (and suggest running 'openclaw config set'). They do not exfiltrate data or call external endpoints. Note: the skill records model switches and token-count stats to local files, and autoSwitch (true by default) can trigger automatic switches, which may increase cloud token usage.
Install Mechanism
No install spec or remote downloads; this is instruction-and-script-only, so nothing is fetched from external URLs or written outside the skill's directory except logs/config produced at runtime.
Credentials
The skill requests no environment variables or credentials. It references 'ollama' as recommended for local models and cloud model IDs in config, but it does not require API keys or other unrelated secrets.
Persistence & Privilege
always:false and user-invocable (normal). The skill writes its own config, logs, and stats under its config directory (expected). Because autoSwitch defaults to true, the agent could autonomously change active models (potentially switching to cloud models that incur token costs) — this is operational rather than malicious, but users should be aware.
Assessment
This skill appears to be what it says: a local/cloud model switcher implemented with scripts and local config files. Before installing or enabling auto-switching:
1) Review the config/config.json location so you know where logs and stats are written.
2) If you don't want any automatic cloud usage or unexpected token costs, set "autoSwitch": false or change costStrategy to "aggressive" to favor local models.
3) Verify OpenClaw and (if you want local inference) Ollama are installed and configured.
4) Confirm cloud-model credentials live in your OpenClaw/global configuration (not the skill) and understand any billing implications.
If you want extra assurance, inspect the skill folder after installation to confirm no unexpected files or outbound network calls are present.

Like a lobster shell, security has layers — review code before you run it.
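A minimal sketch of what the skill's config/config.json might contain, based only on the fields this scan mentions (autoSwitch, costStrategy). The exact schema is not confirmed from the skill's files, and the model identifiers below are hypothetical placeholders:

```
{
  "autoSwitch": false,
  "costStrategy": "aggressive",
  "localModel": "llama3",
  "cloudModel": "example-cloud-model"
}
```

With "autoSwitch": false, model switching stays manual, so the agent cannot autonomously move you onto a billed cloud model; per the assessment above, the "aggressive" costStrategy favors local models when switching is enabled.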
latest: vk978xh5e8k17hbjd04v0yd8je983qtvn
Runtime requirements
🎩 Clawdis
Bins: openclaw
