Mlx Whisper

v1.0.0

Local speech-to-text with MLX Whisper (Apple Silicon optimized, no API key).

MIT-0
Security Scan
VirusTotal: Benign (view report →)
OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description describe local speech-to-text via MLX Whisper on Apple Silicon; SKILL.md requires the mlx_whisper binary and documents usage and models. The declared install hint (the 'mlx-whisper' PyPI package) matches the stated capability. No unrelated credentials, binaries, or config paths are requested.
Instruction Scope
Instructions are simple command examples that invoke the local mlx_whisper binary on audio/video files and note where models are cached (~/.cache/huggingface/). They do not instruct the agent to read unrelated system files, exfiltrate data, or access unrelated environment variables.
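The documented commands follow this general shape. This is a sketch: the model name and flags here are illustrative assumptions, not copied from the skill; verify the exact options against `mlx_whisper --help` for your installed version.

```shell
# Illustrative invocation (flag names assumed from the upstream
# mlx-whisper CLI; confirm with `mlx_whisper --help`).
mlx_whisper interview.mp3 \
  --model mlx-community/whisper-large-v3-mlx \
  --output-dir transcripts \
  --output-format txt
# The model weights are fetched into ~/.cache/huggingface/ on first run.
```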
Install Mechanism
The SKILL.md contains an install metadata entry recommending 'pip install mlx-whisper'. This is a normal distribution mechanism for Python CLIs, but it means code will be installed from PyPI (a moderate trust requirement) and the binary will run locally. Model files are downloaded on first use, which is expected network activity but worth noting.
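One way to contain the moderate-trust PyPI install is a dedicated virtual environment, sketched here under the assumption of a standard Python 3 setup on macOS (paths are illustrative):

```shell
# Keep the PyPI package out of system-wide site-packages.
python3 -m venv ~/.venvs/mlx-whisper
source ~/.venvs/mlx-whisper/bin/activate
pip install mlx-whisper   # code is fetched from PyPI and runs locally
mlx_whisper --help        # confirm the binary resolves inside the venv
```

Deleting the venv directory later removes the package and its dependencies cleanly.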
Credentials
No environment variables, credentials, or config paths are required by the skill beyond the documented model cache path. That is proportionate to a local transcription tool that downloads public models.
Persistence & Privilege
Skill is not always-enabled and is user-invocable; it does not request persistent system-level changes or modify other skills' configurations. Autonomous invocation is allowed (platform default) but there are no additional privilege escalations requested.
Assessment
This skill is coherent for local speech-to-text on Apple Silicon. Before installing:
1. Confirm you trust the 'mlx-whisper' PyPI package and its maintainer.
2. Expect large model downloads (~100 MB–3 GB) to ~/.cache/huggingface/ and ensure you have the disk space.
3. Run the install in a virtualenv or other isolated environment if you want to limit risk.
Network activity to download models is expected; if you require offline-only operation, verify models are pre-downloaded and trusted. For stronger assurance, review the upstream project source (the GitHub link) before installing or running the binary.
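For offline-only operation, the cache can be inspected and further downloads blocked. The `HF_HUB_OFFLINE` variable is honored by the huggingface_hub downloader; the cache path below assumes its standard hub layout:

```shell
# List any cached mlx-community models and their sizes; if the glob
# matches nothing, report that instead of failing.
du -sh ~/.cache/huggingface/hub/models--mlx-community--* 2>/dev/null \
  || echo "no mlx-community models cached yet"

# With this set, huggingface_hub errors out on a missing model
# rather than silently downloading it.
export HF_HUB_OFFLINE=1
```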

Like a lobster shell, security has layers: review code before you run it.

latest: vk97bdd5ezd29v2xbb9sb20a4gs7zv9q7

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

Runtime requirements

🍎 Clawdis
Bins: mlx_whisper
