Install
openclaw skills install lm-studio

Run and integrate LM Studio with local model lifecycle control, OpenAI-compatible APIs, embeddings, and MCP-aware workflows.

Use this skill when the user wants to run local models with LM Studio, connect an app to its local server, or debug weak local inference behavior.
Use this for server readiness, model loading, OpenAI-compatible API integration, embeddings, MCP setup, and local-first operating decisions.
Memory lives in ~/lm-studio/. If ~/lm-studio/ does not exist, run setup.md. See memory-template.md for structure.
~/lm-studio/
├── memory.md # Activation, preferred port, known-good defaults
├── server-notes.md # Reachability checks and server mode notes
├── model-profiles.md # Verified models by workload
└── incidents.md # Repeated failures and confirmed fixes
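setup.md defines the actual bootstrap and activation behavior; the sketch below only illustrates that step, assuming the memory files are plain markdown created exactly as laid out above and then filled in from memory-template.md.

```bash
# Sketch only; setup.md is authoritative. Creates the layout shown above if missing.
MEMORY_DIR="$HOME/lm-studio"
if [ ! -d "$MEMORY_DIR" ]; then
  mkdir -p "$MEMORY_DIR"
  for f in memory.md server-notes.md model-profiles.md incidents.md; do
    touch "$MEMORY_DIR/$f"    # structure for each file comes from memory-template.md
  done
fi
```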
| Topic | File |
|---|---|
| Setup behavior and activation boundaries | setup.md |
| Memory schema and status states | memory-template.md |
| Server startup and smoke tests | server-workflows.md |
| Download, load, unload, and swap models | model-lifecycle.md |
| OpenAI-compatible request patterns | api-recipes.md |
| MCP connection patterns and guardrails | mcp-playbooks.md |
| Symptom-based debugging | troubleshooting.md |
Assumptions:
- LM Studio is already installed on the machine.
- curl and jq are available for smoke tests and response inspection.
- lms is optional but preferred for repeatable server and model operations.

Workflow:
- Use server-workflows.md to confirm the actual port, endpoint reachability, and model visibility (a curl smoke-test sketch follows these lists).
- Use model-lifecycle.md for discovery, loading, unloading, and verification.
- Start from api-recipes.md and change only the base URL and model identifier before rewriting an existing client.
- Pick the endpoint that matches the task: responses, chat/completions, embeddings, or completions.
- Use mcp-playbooks.md to connect servers, but debug model serving and MCP behavior independently.

Common mistakes:
- Assuming port 1234 without checking reachability -> integrations fail even though the app looks healthy.
- Treating GET /v1/models as proof a model is ready -> Just-In-Time listings can appear before a usable runtime is confirmed.
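A minimal smoke-test sketch tying these steps together, assuming the server listens on the default port 1234 (adjust the base URL if server-workflows.md reports otherwise) and using a placeholder model identifier you would replace with one returned by GET /v1/models; the commented lms lines apply only if that optional CLI is installed.

```bash
#!/usr/bin/env bash
# Sketch: confirm reachability and a usable runtime before wiring up an integration.
set -euo pipefail

BASE_URL="${LMSTUDIO_BASE_URL:-http://localhost:1234/v1}"   # assumption: default port 1234
MODEL="${LMSTUDIO_MODEL:-REPLACE_WITH_MODEL_ID}"            # placeholder model identifier

# Optional, if the lms CLI is installed:
# lms server start        # start the local server
# lms load "$MODEL"       # load the model; lms ps shows what is currently loaded

# 1) Reachability: an unreachable server fails here, not inside your app.
curl -sf "$BASE_URL/models" > /dev/null || { echo "server unreachable at $BASE_URL" >&2; exit 1; }

# 2) Model visibility: list the identifiers the server reports.
curl -s "$BASE_URL/models" | jq -r '.data[].id'

# 3) Usable runtime: a tiny chat/completions call proves inference actually runs,
#    which a Just-In-Time entry in /v1/models alone does not.
curl -s "$BASE_URL/chat/completions" \
  -H "Content-Type: application/json" \
  -d "{\"model\":\"$MODEL\",\"messages\":[{\"role\":\"user\",\"content\":\"ping\"}],\"max_tokens\":8}" \
  | jq -r '.choices[0].message.content'

# 4) Embeddings (only if an embedding model is loaded):
# curl -s "$BASE_URL/embeddings" \
#   -H "Content-Type: application/json" \
#   -d "{\"model\":\"$MODEL\",\"input\":\"ping\"}" | jq '.data[0].embedding | length'
```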
Data that leaves your machine:
- None by default; requests are localhost server calls.

Data that stays local:
- Memory written to ~/lm-studio/ if the user wants persistent context.

This skill does NOT:
Install with clawhub install <slug> if user confirms:
- models — Choose models by workload, context budget, and quality tradeoffs.
- api — Shape request payloads, retries, parsing, and integration debugging.
- self-host — Operate local infrastructure with practical reliability and security habits.
- open-router — Escalate from local-first execution to routed cloud models when capability gaps matter.
- docker — Package helper services or MCP servers consistently on the local machine.

clawhub star lm-studio
clawhub sync