mlx-local-inference

v2.2.1

Use when calling local AI on this Mac — text generation, embeddings, speech-to-text, OCR, or image understanding. LLM/VLM via oMLX gateway at localhost:8000/...

Security Scan
VirusTotal
Benign
OpenClaw
Benign
medium confidence
Purpose & Capability
The name and description (local inference via oMLX and uv) align with the runtime instructions: the calls to localhost:8000, the uv run invocations, and the references to ~/models are all consistent with running models locally on macOS.
Instruction Scope
SKILL.md only instructs the agent to call a local HTTP API, run uv to invoke Python model libraries, read model files under ~/models, and use launchctl for the local oMLX service. It does not attempt to read unrelated system files, request unrelated environment variables, or exfiltrate data to remote endpoints.
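For reference, a request to the local HTTP API described above might look like the sketch below. The endpoint path /v1/chat/completions and the payload shape are assumptions based on typical OpenAI-compatible local gateways; the SKILL.md excerpt here only confirms the host and port.

```shell
# Hypothetical request to the local oMLX gateway. The /v1/chat/completions
# path and JSON payload are assumptions (OpenAI-compatible style), not
# confirmed by the skill text; only localhost:8000 is stated there.
curl -sf http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "local", "messages": [{"role": "user", "content": "hello"}]}' \
  || echo "oMLX gateway not reachable on localhost:8000"
```

Because the gateway is local-only, a connection failure here simply means the service is not running; nothing is sent off the machine.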
Install Mechanism
There is no formal install spec in the registry; the SKILL.md recommends installing uv via 'curl -LsSf https://astral.sh/uv/install.sh | sh'. Download-and-execute installer instructions are common for CLIs but are higher risk than package-manager installs, so verify the source before running. No other installers or remote code downloads are required by the skill itself.
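A lower-risk variant of the curl | sh pattern above is to download the installer to a file, review it, and only then execute it. The URL is the one quoted from the SKILL.md; the /tmp path is just an illustration.

```shell
# Download the uv installer to a file for review instead of piping it to sh.
# The /tmp destination is illustrative; pick any path you can inspect.
curl -LsSf https://astral.sh/uv/install.sh -o /tmp/uv-install.sh \
  && head -n 20 /tmp/uv-install.sh \
  || echo "download failed (offline or URL unreachable)"
# After reviewing the full script, run it deliberately:
#   sh /tmp/uv-install.sh
```

Separating download from execution also lets you compare the file against a published checksum, if the author provides one.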
Credentials
The skill requests no environment variables or credentials and only requires the 'uv' binary and an Apple Silicon macOS environment, which is proportionate to local model execution. Model files are referenced under the user's home (~), which is expected.
Persistence & Privilege
The skill is not always-on, does not request elevated platform privileges, and does not modify other skills or system-wide configs beyond invoking a user launchctl command to restart the local oMLX service (which affects only the user's launchd job).
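Restarting a per-user launchd job typically looks like the sketch below. The service label com.example.omlx is a hypothetical placeholder, since the review does not give the real label; list your jobs with launchctl list to find it.

```shell
# Restart a user-level launchd job (macOS only). The label "com.example.omlx"
# is a placeholder; find the real one via: launchctl list | grep -i omlx
if command -v launchctl >/dev/null 2>&1; then
  launchctl kickstart -k "gui/$(id -u)/com.example.omlx" \
    || echo "kickstart failed (check the service label)"
else
  echo "launchctl not available (not macOS)"
fi
```

The gui/$(id -u)/ domain targets only the current user's session, which matches the review's point that no system-wide change is involved.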
Assessment
This skill appears to do what it says: operate local ML models via a local oMLX gateway and the uv runner. Before installing or following the SKILL.md:
1) Verify that the uv installer URL (https://astral.sh) is one you trust; avoid running arbitrary curl | sh, and prefer a package manager if one is available.
2) Confirm you have enough disk space and have placed the large model files under ~/models as described.
3) The skill talks to localhost:8000 only; ensure that oMLX is intentionally running and not exposed to untrusted networks.
4) Restarting via launchctl affects only your user service; it requires the appropriate user permissions but is not a system-wide change.
If you need higher assurance, ask the skill author for a formal install spec or an auditable package (a Homebrew formula or repository) and for cryptographic checksums for any model or installer downloads.
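Point 3 can be checked concretely by inspecting what is bound to TCP port 8000: an address of 127.0.0.1:8000 means the listener is local-only, while *:8000 would be reachable from the network. The lsof invocation below is one common way to do this on macOS.

```shell
# Show listeners on TCP port 8000. A 127.0.0.1:8000 (or [::1]:8000) address
# means local-only; *:8000 would accept connections from other hosts.
if command -v lsof >/dev/null 2>&1; then
  lsof -nP -iTCP:8000 -sTCP:LISTEN || echo "nothing listening on port 8000"
else
  echo "lsof not available"
fi
```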

Like a lobster shell, security has layers — review code before you run it.

latest: vk97f8czq1mx5v9fg33vwnnbnfx832rmb

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

Runtime requirements

OS: macOS (any)
bin: uv
