Offload Tasks to LM Studio Models

Pass. Audited by ClawScan on May 10, 2026.

Overview

The skill appears to do what it says: it uses your configured LM Studio server. Note, however, that prompts are sent to that server, and optional stateful mode and logging can retain local context.

This skill is reasonable to install if you run and trust LM Studio locally. Before using it with private data, confirm the API URL still points to localhost or another trusted server, and be deliberate about stateful mode, request/response logging, and model unload options.

Findings (3)

This is an artifact-based, informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Finding 1: Local model loading and unloading

What this means

The agent may use local CPU/GPU memory and can load or unload selected LM Studio model instances while completing a task.

Why it was flagged

The skill intentionally gives the agent local command/API workflows for chatting with LM Studio and optionally loading or unloading models. This can affect local LM Studio resources, but it matches the skill's purpose and is not hidden.

Skill content
From the skill folder: node scripts/lmstudio-api.mjs <model> '<task>' [options]. ... Or run scripts/unload.mjs <model_key>
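
For illustration, an end-to-end call might look like the lines below. The model key is hypothetical, and --log is the optional logging flag discussed under the third finding; consult the skill's SKILL.md for the exact option names it supports.

    # Chat with a locally loaded model (model key is hypothetical)
    node scripts/lmstudio-api.mjs qwen2.5-7b-instruct 'Summarize ./notes.md in three bullets' --log

    # Optionally free CPU/GPU memory afterward by unloading the same model
    node scripts/unload.mjs qwen2.5-7b-instruct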
Recommendation

Use this skill when you are comfortable letting the agent manage LM Studio models; avoid optional unload actions if you want models left running.

Finding 2: Task content goes to the configured API URL

What this means

If the API URL is changed to a non-local or untrusted server, prompts and task content could be sent outside the machine.

Why it was flagged

Task content is sent to the configured LM Studio API URL. The default is localhost, but the LM_STUDIO_API_URL environment variable or the --api-url option can point it elsewhere.

Skill content
const BASE_URL = process.env.LM_STUDIO_API_URL || 'http://127.0.0.1:1234'; ... input: taskContent
Recommendation

Keep LM_STUDIO_API_URL and --api-url set to localhost or another endpoint you explicitly trust, especially for private documents or secrets.
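
As a precautionary sketch (not part of the skill), a wrapper could fail closed when the resolved base URL is not loopback. It reuses the same environment variable and default shown in the excerpt above; the file name guard.mjs is hypothetical.

    // guard.mjs: refuse to proceed unless the LM Studio endpoint is local.
    // A minimal sketch; extend the allow list with other endpoints you trust.
    const BASE_URL = process.env.LM_STUDIO_API_URL || 'http://127.0.0.1:1234';

    const host = new URL(BASE_URL).hostname;
    const isLocal = ['127.0.0.1', 'localhost', '::1', '[::1]'].includes(host);

    if (!isLocal) {
      console.error(`Refusing to send task content to non-local endpoint: ${BASE_URL}`);
      process.exit(1);
    }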

Finding 3: Stateful mode and logging can retain context

What this means

Local LM Studio state or the .lmstudio-state file may link future calls to prior context, and optional logs can contain prompt and response data.

Why it was flagged

The chat payload asks LM Studio to store the exchange, and optional stateful mode persists a response_id in the current working directory for later continuation.

Skill content
store: true ... const STATE_FILE = path.join(process.cwd(), '.lmstudio-state'); ... writeState(model, result.response_id)
Recommendation

Use --stateful and --log only when needed, clear .lmstudio-state or log files after sensitive work, and review LM Studio's local retention settings.
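
A minimal cleanup sketch, assuming the state file name shown in the excerpt above (cleanup.mjs is a hypothetical helper, not part of the skill). It deletes the per-directory state file so later runs cannot continue from prior context; any log files the skill writes would need the same treatment, under whatever names it gives them.

    // cleanup.mjs: delete the per-directory continuation state after sensitive work.
    import fs from 'node:fs';
    import path from 'node:path';

    const STATE_FILE = path.join(process.cwd(), '.lmstudio-state');

    // force: true makes this a no-op when the file does not exist.
    fs.rmSync(STATE_FILE, { force: true });
    console.log(`Cleared ${STATE_FILE} if it existed.`);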