Offload Tasks to LM Studio Models
Analysis
The skill appears to do what it advertises: it sends selected tasks to a configured LM Studio REST server. Users should keep the endpoint local and be aware that state and logs can persist on disk.
Findings (3)
This is an artifact-based, informational review of SKILL.md, metadata, install specs, static-scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Checks for instructions or behavior that redirect the agent, misuse tools, execute unexpected code, cascade across systems, exploit user trust, or continue outside the intended task.
exec command:"node scripts/lmstudio-api.mjs <model> '<task>' --temperature=0.7 --max-output-tokens=2000"
The skill instructs the agent to run local commands with task text as an argument. This is expected for the LM Studio helper workflow, but task text should be passed as a discrete argument rather than interpolated into a shell string, to avoid quoting and injection issues.
Checks for exposed credentials, poisoned memory or context, unclear communication boundaries, or sensitive data that could leave the user's control.
const BASE_URL = process.env.LM_STUDIO_API_URL || 'http://127.0.0.1:1234'; ... body: JSON.stringify(payload)
Task content is sent to the configured LM Studio API endpoint. The default is local, but the LM_STUDIO_API_URL environment variable or the --api-url flag can point the helper at a remote host, moving the data boundary off-machine.
const STATE_FILE = path.join(process.cwd(), '.lmstudio-state'); ... store: true ... fs.writeFileSync(logPath, JSON.stringify({ request: payload, attempt }, null, 2) + '\n', { flag: 'a' });
The helper sends store: true to LM Studio, can persist a response_id for --stateful use, and appends request/response logs when --log is supplied.
