MLX Swift LM Expert
Analysis
This documentation-only MLX Swift skill appears coherent and benign, with normal cautions around model downloads, optional Hugging Face tokens, tool calling, and local prompt/document storage.
Findings (4)
This is an artifact-based, informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Checks for instructions or behavior that redirect the agent, misuse tools, execute unexpected code, cascade across systems, exploit user trust, or continue outside the intended task.
case .toolCall(let toolCall): ... let result = try await toolCall.execute(with: weatherTool)
The documentation shows a model-generated tool call being executed through an application-defined handler. This is central to the stated tool-calling feature, but high-impact tools should still require validation or user approval.
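The validation point above can be sketched as a small gate placed between the model's proposed tool call and its execution. This is a hypothetical sketch, not MLX Swift API: `ProposedToolCall`, `ToolCallGate`, and the tool names are all illustrative, and a real integration would sit inside the application's tool-call handler.

```swift
import Foundation

// Hypothetical sketch: gate a model-generated tool call behind an allowlist
// before executing it. None of these types exist in MLX Swift; they only
// illustrate the "validation or user approval" step for high-impact tools.
struct ProposedToolCall {
    let name: String
    let arguments: [String: String]
}

enum ToolCallDecision {
    case allow            // low-impact tool, run without prompting
    case requireApproval  // pause and ask the user
    case deny             // never run from model output
}

struct ToolCallGate {
    let autoApproved: Set<String>
    let denied: Set<String>

    func evaluate(_ call: ProposedToolCall) -> ToolCallDecision {
        if denied.contains(call.name) { return .deny }
        if autoApproved.contains(call.name) { return .allow }
        return .requireApproval
    }
}

let gate = ToolCallGate(autoApproved: ["get_weather"], denied: ["delete_file"])
print(gate.evaluate(ProposedToolCall(name: "get_weather", arguments: [:])))  // allow
print(gate.evaluate(ProposedToolCall(name: "send_email", arguments: [:])))   // requireApproval
print(gate.evaluate(ProposedToolCall(name: "delete_file", arguments: [:])))  // deny
```

The design choice is deny-by-default for anything not explicitly allowlisted: unknown tool names fall through to user approval rather than silent execution.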
Download: Model weights fetched from HuggingFace (cached locally)
The skill documents loading third-party model artifacts from Hugging Face and caching them locally. This is expected for MLX model usage, but trust depends on the model's provenance and on which revision is actually fetched.
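The revision concern can be mitigated by pinning the expected commit alongside the model id. A minimal sketch, assuming only that the application can learn which revision was resolved at download time; `PinnedModel`, `verify`, and the example id and hash are illustrative, not part of MLX Swift or swift-transformers.

```swift
import Foundation

// Hypothetical sketch: record an expected revision (commit hash) next to the
// model id and refuse to load anything else. The library download path is
// elided; only the trust check is shown.
struct PinnedModel {
    let id: String
    let expectedRevision: String
}

enum ModelTrustError: Error {
    case revisionMismatch(expected: String, got: String)
}

func verify(_ pin: PinnedModel, resolvedRevision: String) throws {
    guard resolvedRevision == pin.expectedRevision else {
        throw ModelTrustError.revisionMismatch(expected: pin.expectedRevision,
                                               got: resolvedRevision)
    }
}

let pin = PinnedModel(id: "mlx-community/example-model",
                      expectedRevision: "abc123")

// Passes: the resolved revision matches the pin.
try verify(pin, resolvedRevision: "abc123")
print("revision verified")

// A mismatched revision would throw instead of loading:
// try verify(pin, resolvedRevision: "def456")
```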
Checks whether tool use, credentials, dependencies, identity, account access, or inter-agent boundaries are broader than the stated purpose.
let hub = HubApi(hfToken: "your_token") ... configuration: .init(id: "private/model")
The documentation includes an optional Hugging Face token for private model access. This is purpose-aligned, with no evidence of the token being logged or transmitted anywhere unrelated, but it still involves handling an account credential.
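One way to reduce the credential exposure the docs hint at (a token literal in source) is to read it from the environment instead. A sketch under an assumption: the docs show `HubApi(hfToken:)` accepting a token string; the `resolveToken` helper and the `HF_TOKEN` variable name are conventional but illustrative, not library API.

```swift
import Foundation

// Illustrative helper: read an optional Hugging Face token from the
// environment so it never appears in source control. Returns nil (public
// access) when the variable is unset or empty.
func resolveToken(environment: [String: String] = ProcessInfo.processInfo.environment) -> String? {
    environment["HF_TOKEN"].flatMap { $0.isEmpty ? nil : $0 }
}

// Usage against the documented API (assumed signature, commented out here):
// let hub = HubApi(hfToken: resolveToken())

print(resolveToken(environment: ["HF_TOKEN": "hf_example"]) ?? "no token")  // hf_example
print(resolveToken(environment: [:]) ?? "no token")                         // no token
```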
Checks for exposed credentials, poisoned memory or context, unclear communication boundaries, or sensitive data that could leave the user's control.
try savePromptCache( url: fileURL, cache: cache, metadata: ["prompt": "My cached prompt"] )
The docs show saving prompt cache state with prompt metadata. This is an expected performance feature, but it can persist sensitive prompts or reused context locally.
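The local-persistence concern above can be narrowed by filtering metadata before it is written. A minimal sketch, assuming only that `savePromptCache` takes a string metadata dictionary as shown in the docs; the `redact` helper and its key list are illustrative, not library API.

```swift
import Foundation

// Illustrative helper: strip keys that commonly carry sensitive prompt text
// from cache metadata before it is persisted to disk.
func redact(_ metadata: [String: String],
            sensitiveKeys: Set<String> = ["prompt", "user", "document"]) -> [String: String] {
    metadata.filter { !sensitiveKeys.contains($0.key) }
}

let metadata = ["prompt": "My cached prompt", "model": "example-model"]
let safe = redact(metadata)

// The documented call would then receive the redacted dictionary
// (assumed signature, commented out here):
// try savePromptCache(url: fileURL, cache: cache, metadata: safe)

print(safe)  // ["model": "example-model"]
```

The cache tensors themselves still encode the processed context, so redacting metadata limits casual exposure but does not make the cache file safe to share.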
