v1.0.3

Offload Tasks to LM Studio Models

Verdict: Benign. ClawScan report for this skill. Analyzed May 1, 2026, 4:53 AM.

Analysis

The skill appears to do what it advertises—send selected tasks to a configured LM Studio REST server—but users should keep the endpoint local and be aware of state/log persistence.

Guidance

This skill is reasonable to install if you use LM Studio for local model offloading. Before using it for private content, confirm the API URL is local, avoid optional logging/stateful mode unless needed, and handle task text safely when running the documented shell commands.

Findings (3)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Abnormal behavior control

Checks for instructions or behavior that redirect the agent, misuse tools, execute unexpected code, cascade across systems, exploit user trust, or continue outside the intended task.

Tool Misuse and Exploitation
Severity: Low | Confidence: High | Status: Note
SKILL.md
exec command:"node scripts/lmstudio-api.mjs <model> '<task>' --temperature=0.7 --max-output-tokens=2000"

The skill instructs the agent to run local commands with task text as an argument. This is expected for the LM Studio helper workflow, but task text should be passed safely to avoid shell quoting issues.

User impact: If task text is pasted into a shell command without safe quoting, unusual characters in the task could affect the command line.
Recommendation: Use careful argument escaping or a safer invocation pattern for untrusted task text; review load/unload commands before running them.
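A minimal sketch of that safer invocation pattern, assuming a POSIX shell. The sample task text and the `printf` stand-in are illustrative; in practice the quoted variable would be passed to the documented `node scripts/lmstudio-api.mjs` command.

```shell
# Keep untrusted task text in a variable and expand it only inside double
# quotes: backticks, $(), and embedded quotes in the task reach the program
# as one literal argument instead of being re-parsed by the shell.
TASK='Summarize: `id` and $(whoami) and "quotes"'   # hostile-looking sample

# Stand-in for: node scripts/lmstudio-api.mjs <model> "$TASK" --temperature=0.7
printf 'task arg: %s\n' "$TASK"
```

The key point is that `"$TASK"` is a single quoted expansion, so no part of the task text is ever interpreted as shell syntax.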
Sensitive data protection

Checks for exposed credentials, poisoned memory or context, unclear communication boundaries, or sensitive data that could leave the user's control.

Insecure Inter-Agent Communication
Severity: Low | Confidence: High | Status: Note
scripts/lmstudio-api.mjs
const BASE_URL = process.env.LM_STUDIO_API_URL || 'http://127.0.0.1:1234'; ... body: JSON.stringify(payload)

Task content is sent to the configured LM Studio API endpoint. The default is local, but an environment variable or --api-url can redirect the data boundary.

User impact: Privacy-sensitive prompts stay local only if the configured API URL points to a trusted local LM Studio server.
Recommendation: For sensitive content, verify LM_STUDIO_API_URL and --api-url are unset or point to 127.0.0.1/local trusted infrastructure.
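As a sketch of that pre-flight check, assuming a POSIX shell: the default URL mirrors the script's documented fallback, and the helper name `is_local_url` is our own.

```shell
# Succeed only for loopback endpoints (IPv4, localhost, IPv6).
is_local_url() {
  case "$1" in
    http://127.0.0.1:*|http://localhost:*|'http://[::1]:'*) return 0 ;;
    *) return 1 ;;
  esac
}

# Effective endpoint: env var override, else the script's documented default.
URL="${LM_STUDIO_API_URL:-http://127.0.0.1:1234}"
if is_local_url "$URL"; then
  echo "endpoint is local: $URL"
else
  echo "WARNING: non-local endpoint: $URL" >&2
fi
```

Remember that a `--api-url` flag passed on the command line would bypass a check like this, so review the full invocation as well.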
Memory and Context Poisoning
Severity: Low | Confidence: High | Status: Note
scripts/lmstudio-api.mjs
const STATE_FILE = path.join(process.cwd(), '.lmstudio-state'); ... store: true ... fs.writeFileSync(logPath, JSON.stringify({ request: payload, attempt }, null, 2) + '\n', { flag: 'a' });

The helper sends store:true to LM Studio, can persist a response_id for --stateful use, and can write request/response logs when --log is supplied.

User impact: Prompts, outputs, or conversation identifiers may remain in local LM Studio state or user-chosen log files.
Recommendation: Avoid --stateful and --log for sensitive one-off tasks unless needed, and delete .lmstudio-state or log files when no longer required.
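A cleanup sketch, assuming a POSIX shell: the `.lmstudio-state` path comes from the script (it is written to the working directory), while the log filename `task.log` is an assumption standing in for whatever name was passed to --log.

```shell
# After a sensitive one-off run, remove the helper's state file and any
# log it was told to write. -f makes this a no-op if the files don't exist.
rm -f .lmstudio-state task.log   # task.log: your --log filename here
```

Note this clears only the client side; anything persisted by the LM Studio server via store:true is retained there until removed through LM Studio itself.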