# lmstudio-model-switch

Fast model switching between LM Studio (local) and the Kimi API for OpenClaw: switch your agent's AI model on-the-fly between local (LM Studio) and cloud (Kimi API) providers with a single command.

## Install

```bash
openclaw skills install lmstudio-model-switch
```
```bash
# Clone to your OpenClaw skills directory
git clone https://github.com/yourusername/lmstudio-model-switch \
  ~/.openclaw/workspace/skills/lmstudio-model-switch

# Or manually copy
cp -r lmstudio-model-switch ~/.openclaw/workspace/skills/
```
## Commands

| Command | Description |
|---|---|
| `/switch-model status` | Show current model and available providers |
| `/switch-model local` | Switch to LM Studio (Qwen 3.5 9B default) |
| `/switch-model local <model>` | Switch to a specific local model |
| `/switch-model api` | Switch to the Kimi K2.5 API |
| `/switch-model kimi` | Alias for `/switch-model api` |
## Usage

```text
# Check current status
/switch-model status

# Switch to local LM Studio
/switch-model local

# Switch to a specific model
/switch-model local mistral-small-24b

# Switch to Kimi API
/switch-model api
```
## Configuration

Add to your `openclaw.json`:

```json
{
  "skills": {
    "lmstudio-model-switch": {
      "enabled": true,
      "config": {
        "local": {
          "baseUrl": "http://127.0.0.1:1234/v1",
          "defaultModel": "qwen/qwen3.5-9b"
        },
        "api": {
          "provider": "kimi-coding",
          "model": "k2p5"
        }
      }
    }
  }
}
```
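To make the shape of this block concrete, here is a minimal sketch of reading and validating the skill's section of `openclaw.json`. The `load_switch_config` helper is hypothetical, not part of the skill; it only assumes the keys shown in the config above.

```python
import json

def load_switch_config(raw: str) -> dict:
    """Parse openclaw.json text and return this skill's config block.

    Raises KeyError if the skill is missing, RuntimeError if disabled.
    """
    cfg = json.loads(raw)
    skill = cfg["skills"]["lmstudio-model-switch"]
    if not skill.get("enabled", False):
        raise RuntimeError("lmstudio-model-switch is disabled")
    return skill["config"]

# Example using the configuration shown above
raw = """
{ "skills": { "lmstudio-model-switch": { "enabled": true,
  "config": {
    "local": { "baseUrl": "http://127.0.0.1:1234/v1",
               "defaultModel": "qwen/qwen3.5-9b" },
    "api": { "provider": "kimi-coding", "model": "k2p5" } } } } }
"""
config = load_switch_config(raw)
print(config["local"]["defaultModel"])  # qwen/qwen3.5-9b
```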
Switching updates the `"primary"` model under `agents.defaults` in `openclaw.json`.

Use local when handling:

Use API when needing:

Switch to API when:
## Troubleshooting

```bash
# Check if LM Studio is running
curl http://127.0.0.1:1234/api/v0/models

# Restart LM Studio if needed
killall lmstudio; sleep 2; lmstudio &
```
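After a restart, LM Studio can take a few seconds to come back up. Below is a hedged sketch of a readiness poll against the `/api/v0/models` endpoint used in the `curl` check above; `wait_for_lmstudio` is a hypothetical helper, and the OpenAI-style `{"data": [{"id": ...}]}` payload shape is an assumption.

```python
import json
import time
import urllib.request

def wait_for_lmstudio(base_url="http://127.0.0.1:1234", attempts=5,
                      delay=2.0, fetch=None):
    """Poll /api/v0/models until LM Studio answers; return the model ids.

    `fetch` is injectable so the retry loop can be tested without a server.
    """
    if fetch is None:
        def fetch(url):
            with urllib.request.urlopen(url, timeout=3) as resp:
                return json.load(resp)
    last_err = None
    for _ in range(attempts):
        try:
            payload = fetch(base_url + "/api/v0/models")
            # Assumed payload shape: {"data": [{"id": "..."}]}
            return [m["id"] for m in payload.get("data", [])]
        except Exception as err:  # e.g. connection refused during startup
            last_err = err
            time.sleep(delay)
    raise RuntimeError(f"LM Studio not reachable: {last_err}")

# Demo with a stubbed response instead of a live server
models = wait_for_lmstudio(fetch=lambda url: {"data": [{"id": "qwen/qwen3.5-9b"}]})
print(models)  # ['qwen/qwen3.5-9b']
```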
```bash
# Validate the config file
python3 -m json.tool ~/.openclaw/openclaw.json

# Restore from a backup
cp ~/.openclaw/openclaw.json.bak.* ~/.openclaw/openclaw.json

# Check service status
systemctl --user status openclaw-gateway

# Manual restart
systemctl --user restart openclaw-gateway
```
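The edit-backup-restart cycle above can be sketched end to end. This is a hypothetical illustration of what a switch does, not the skill's actual implementation: it rewrites `agents.defaults.primary` in `openclaw.json`, keeps a timestamped `.bak.*` copy matching the restore command above, and restarts the gateway via `systemctl`.

```python
import json
import shutil
import subprocess
import tempfile
import time
from pathlib import Path

def switch_primary_model(config_path: Path, model: str,
                         restart: bool = True) -> dict:
    """Point agents.defaults.primary at `model`, keeping a backup.

    Returns the updated config dict. With `restart`, the gateway is
    restarted so the new model takes effect.
    """
    # Timestamped backup, e.g. openclaw.json.bak.1700000000
    backup = config_path.parent / f"{config_path.name}.bak.{int(time.time())}"
    shutil.copy(config_path, backup)

    cfg = json.loads(config_path.read_text())
    cfg.setdefault("agents", {}).setdefault("defaults", {})["primary"] = model
    config_path.write_text(json.dumps(cfg, indent=2))

    if restart:
        subprocess.run(
            ["systemctl", "--user", "restart", "openclaw-gateway"], check=True
        )
    return cfg

# Dry run against a throwaway config (no restart)
with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp) / "openclaw.json"
    path.write_text('{"agents": {"defaults": {"primary": "kimi-coding/k2p5"}}}')
    updated = switch_primary_model(path, "qwen/qwen3.5-9b", restart=False)
    print(updated["agents"]["defaults"]["primary"])  # qwen/qwen3.5-9b
```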
---

**Author:** WarMech - OpenClaw Community
**License:** MIT
**Version:** v1.0.0 (2026-03-14)