MLX Swift LM Expert

v1.0.0

MLX Swift LM - Run LLMs and VLMs on Apple Silicon using MLX. Covers local inference, streaming, tool calling, LoRA fine-tuning, and embeddings.

Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description (MLX Swift LM for Apple Silicon) match the documented behavior: loading models, running inference, streaming, tool calling, LoRA training, and computing embeddings. The required capabilities (downloading models, reading model directories, saving adapter weights, local training) are consistent with the stated purpose. No unrelated credentials, binaries, or config paths are requested.
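For context, the skill's documented load-and-generate flow looks roughly like the sketch below. The identifiers (LLMModelFactory, ModelConfiguration, UserInput, GenerateParameters) follow the mlx-swift-examples API as the skill describes it; treat the names, signatures, and the model ID as assumptions, not verified code.

```swift
import MLXLLM
import MLXLMCommon

// Sketch only: API names are assumptions based on the skill's docs.
// The model ID is illustrative; use only model IDs you trust.
let container = try await LLMModelFactory.shared.loadContainer(
    configuration: ModelConfiguration(id: "mlx-community/Llama-3.2-1B-Instruct-4bit"))

let output = try await container.perform { context in
    // Prepare the prompt, then stream tokens until generation completes.
    let input = try await context.processor.prepare(
        input: UserInput(prompt: "Explain LoRA in one sentence."))
    return try MLXLMCommon.generate(
        input: input, parameters: GenerateParameters(), context: context
    ) { _ in .more }
}
```

Note that the first call downloads weights into the Hugging Face cache, which is the network and disk behavior assessed below.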
Instruction Scope
SKILL.md and the reference docs instruct the agent/developer to download models from the Hugging Face hub, load tokenizers, read local model/adapter directories, and save adapter/checkpoint files. Such network I/O and local filesystem access are expected for a local model runtime. The instructions do reference cache paths (~/.cache/huggingface/...) and reading training data files via FileManager; those are within scope, but users should be aware the skill will read and write model and training files on disk.
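As an illustration of the file I/O in scope, reading a JSONL training file with Foundation looks like the sketch below. The data/train.jsonl path is hypothetical; the skill reads whatever training paths the user supplies.

```swift
import Foundation

// Hypothetical training-data path; the skill reads user-specified files.
let trainURL = URL(fileURLWithPath: "data/train.jsonl")

// Read the file and split it into one JSON record per line,
// which is the usual JSONL layout for LoRA training data.
let contents = try String(contentsOf: trainURL, encoding: .utf8)
let records = contents.split(separator: "\n").map(String.init)
print("loaded \(records.count) training records")
```

This is ordinary Foundation usage; the point is that the skill's file access is limited to paths the user names, not arbitrary system locations.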
Install Mechanism
Instruction-only skill with no install spec and no code files to execute at install time. This is low-risk from an install mechanism perspective.
Credentials
The skill declares no required environment variables or credentials. It documents optional use of a HubApi with an hfToken for private models, which is appropriate. No unrelated secrets or multiple unrelated credentials are requested.
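For private models, the docs describe passing an hfToken to a HubApi instance (from swift-transformers). The initializer shape below is an assumption based on that description; reading the token from the environment rather than hard-coding it is the recommended pattern.

```swift
import Foundation
import Hub  // swift-transformers

// Assumed initializer shape per the skill's docs: hfToken is optional
// and only needed for gated/private models. Keep the token out of source.
let token = ProcessInfo.processInfo.environment["HF_TOKEN"]
let hub = HubApi(hfToken: token)
```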
Persistence & Privilege
The skill sets always: false and uses default invocation settings. It does not request permanent system presence or modify other skills or system-wide agent settings. It describes saving and loading adapter weights and model caches at user-specified file paths, which is normal for this domain.
Assessment
This skill is internally coherent for running and fine-tuning local models on Apple Silicon. Before installing or using it, consider:

1. Model downloads: it fetches model weights from Hugging Face (public models are fine; private models may require your HF token). Only load models from sources you trust.
2. Disk and memory: large models require substantial disk space and memory, and the skill stores caches and .safetensors files (the docs reference ~/.cache/huggingface and adapter.safetensors).
3. Local file I/O: LoRA training and loading functions read training data from local directories and write checkpoints. Ensure training data and target paths are correct and safe.
4. Network: the skill expects network access to download models; plan accordingly if you operate under network restrictions.
5. No environment variables or installs are required by the skill itself, but you should still review any model code you load (model weights are data, yet they can encode harmful behaviors).

If you need higher assurance, ask the maintainer for the skill's source repository or a signed release, and only use trusted model IDs.

Like a lobster shell, security has layers: review code before you run it.

latest: vk976cy6zs7dc40vgfq5da1td9580b2xg

License

MIT-0
Free to use, modify, and redistribute. No attribution required.
