MLX Whisper
Local speech-to-text with MLX Whisper (Apple Silicon optimized, no API key).
MIT-0 · Free to use, modify, and redistribute. No attribution required.
⭐ 1 · 2.8k · 15 current installs · 15 all-time installs
Security Scan
OpenClaw
Benign (high confidence)
Purpose & Capability
The name and description claim local MLX Whisper transcription for Apple Silicon; SKILL.md requires the mlx_whisper binary and documents usage and models. The declared install hint (pip package 'mlx-whisper') matches the stated capability. No unrelated credentials, binaries, or config paths are requested.
Instruction Scope
Instructions are simple command examples invoking the local mlx_whisper binary on audio/video files, and note where models cache (~/.cache/huggingface/). They do not instruct reading unrelated system files, exfiltrating data, or accessing unrelated env vars.
Install Mechanism
The SKILL.md contains an install metadata entry recommending 'pip install mlx-whisper'. This is a normal distribution mechanism for Python CLIs, but it means code will be installed from PyPI (moderate trust requirement) and the binary will run locally. Model files are downloaded on first use (network activity) — expected but worth noting.
Credentials
No environment variables, credentials, or config paths are required by the skill beyond the documented model cache path. That is proportionate to a local transcription tool that downloads public models.
Persistence & Privilege
Skill is not always-enabled and is user-invocable; it does not request persistent system-level changes or modify other skills' configurations. Autonomous invocation is allowed (platform default) but there are no additional privilege escalations requested.
Assessment
This skill is coherent for local speech-to-text on Apple Silicon, but before installing: (1) confirm you trust the 'mlx-whisper' PyPI package and its maintainer, (2) expect large model downloads (~100MB–3GB) to ~/.cache/huggingface/ (ensure disk space), and (3) run installs in a virtualenv or isolated environment if you want to limit risk. Network activity to download models is expected; if you require offline-only operation, verify models are pre-downloaded and trusted. If you need stronger assurances, review the upstream project source (the GitHub link) before installing/running the binary.
Like a lobster shell, security has layers: review code before you run it.
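The isolated-install advice in point (3) can be sketched like this; the venv path `~/.venvs/mlx-whisper` is just an example location, not something the skill requires:

```shell
# Install into a throwaway virtualenv instead of the system Python
python3 -m venv ~/.venvs/mlx-whisper
source ~/.venvs/mlx-whisper/bin/activate
pip install mlx-whisper
mlx_whisper --help        # confirm the binary is on PATH
```

Deleting `~/.venvs/mlx-whisper` later removes the package cleanly without touching system Python.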
Current version: v1.0.0
Runtime requirements
🍎 Clawdis
Bins: mlx_whisper
SKILL.md
MLX Whisper
Local speech-to-text using Apple MLX, optimized for Apple Silicon Macs.
Quick Start
mlx_whisper /path/to/audio.mp3 --model mlx-community/whisper-large-v3-turbo
Common Usage
# Transcribe to text file
mlx_whisper audio.m4a -f txt -o ./output
# Transcribe with language hint
mlx_whisper audio.mp3 --language en --model mlx-community/whisper-large-v3-turbo
# Generate subtitles (SRT)
mlx_whisper video.mp4 -f srt -o ./subs
# Translate to English
mlx_whisper foreign.mp3 --task translate
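The single-file commands above extend naturally to a folder of recordings; in this sketch, `recordings/` and `./output` are placeholder paths:

```shell
# Batch-transcribe every .m4a in a folder to text files (placeholder paths)
mkdir -p ./output
for f in recordings/*.m4a; do
  mlx_whisper "$f" -f txt -o ./output
done
```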
Models (download on first use)
| Model | Size | Speed | Quality |
|---|---|---|---|
| mlx-community/whisper-tiny | ~75MB | Fastest | Basic |
| mlx-community/whisper-base | ~140MB | Fast | Good |
| mlx-community/whisper-small | ~470MB | Medium | Better |
| mlx-community/whisper-medium | ~1.5GB | Slower | Great |
| mlx-community/whisper-large-v3 | ~3GB | Slowest | Best |
| mlx-community/whisper-large-v3-turbo | ~1.6GB | Fast | Excellent (Recommended) |
Notes
- Requires Apple Silicon Mac (M1/M2/M3/M4)
- Models cache to ~/.cache/huggingface/
- Default model is mlx-community/whisper-tiny; use --model mlx-community/whisper-large-v3-turbo for best results
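Since cached models can grow to several gigabytes, it is worth checking how much space the cache directory noted above is using:

```shell
# See how much disk space cached Whisper models are using
du -sh ~/.cache/huggingface/ 2>/dev/null || echo "no models cached yet"
```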
Files
1 total
