Local Whisper
v1.0.0
Local speech-to-text using OpenAI Whisper. Runs fully offline after the initial model download. High-quality transcription with multiple model sizes.
⭐ 12 · 9.9k · 69 current · 74 all-time
by @araa47
MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Benign (high confidence)

Purpose & Capability
Name, description, declared binary (ffmpeg), package dependencies (openai-whisper, torch), and the included Python transcription code all align with a local Whisper STT tool. No unrelated credentials or config paths are requested.
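The transcription flow the scan describes can be sketched in a few lines, assuming scripts/transcribe.py uses openai-whisper's public API (load_model / transcribe); the function name and default model size here are illustrative, not taken from the repository.

```python
# Illustrative sketch of a local Whisper transcription call; the wrapper
# function and "base" default are assumptions, the whisper calls are the
# library's documented API.
def transcribe(audio_path: str, model_name: str = "base") -> str:
    import whisper  # model weights are fetched on the first load_model() call

    model = whisper.load_model(model_name)
    return model.transcribe(audio_path)["text"]
```

Decoding requires ffmpeg on PATH, which matches the listing's declared binary.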
Instruction Scope
SKILL.md stays within the STT task, covering venv creation and pip installation. Two small inconsistencies: the README examples call ~/.clawdbot/skills/local-whisper/scripts/local-whisper, but the repository provides scripts/transcribe.py (no wrapper named local-whisper is included); and the instructions use the 'uv' command (uv venv, uv pip), but 'uv' is not listed in the required binaries. Also note that models are downloaded at runtime by whisper.load_model(), so an initial internet connection is required to fetch model weights.
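The 'uv' commands map directly onto stdlib tooling, so no extra helper binary is needed; the sketch below prints the substitutions. The package names come from the scan, but the exact SKILL.md command lines are paraphrased (assumption).

```python
# Map the 'uv' helper commands onto python -m venv / python -m pip
# equivalents; the .venv path is an illustrative choice.
import sys

replacements = {
    "uv venv .venv": f"{sys.executable} -m venv .venv",
    "uv pip install openai-whisper torch": ".venv/bin/python -m pip install openai-whisper torch",
}
for uv_cmd, stdlib_cmd in replacements.items():
    print(f"{uv_cmd}  ->  {stdlib_cmd}")
```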
Install Mechanism
No install spec exists in the registry (instruction-only), so the registry forces nothing onto disk. SKILL.md recommends pip-installing openai-whisper and torch (the torch download uses the official PyTorch index URL). This is a standard approach; the user executes these installs locally in a venv.
Credentials
The skill requests no environment variables or credentials. That is proportionate for a local transcription utility.
Persistence & Privilege
The 'always' flag is false, and the skill does not request elevated or persistent platform-wide privileges. At runtime it will store downloaded model weights in a cache on the host (normal for ML models), but the skill does not modify other skills or system-wide agent settings.
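To see where those weights land, a short sketch, assuming openai-whisper's standard cache location (XDG_CACHE_HOME or ~/.cache, plus a "whisper" subdirectory):

```python
# Compute the default whisper weight cache directory; the layout is an
# assumption based on openai-whisper's usual behavior.
import os

cache_root = os.getenv("XDG_CACHE_HOME", os.path.expanduser("~/.cache"))
whisper_cache = os.path.join(cache_root, "whisper")
print("model weights are cached in:", whisper_cache)
```

Deleting that directory reclaims the disk space but forces a re-download on the next run.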
Assessment
This skill is internally coherent for local Whisper STT, but check a few things before installing:
(1) SKILL.md references a scripts/local-whisper wrapper, but only transcribe.py is included; you may need to run the Python file directly or add a small launcher.
(2) The instructions use the 'uv' helper tool, but 'uv' is not declared as a required binary; make sure you understand or replace those commands (python -m venv / pip work as substitutes).
(3) Whisper downloads model weights the first time you run it (large models are gigabytes), which requires internet access and disk space; after download it runs offline.
(4) Installing packages with pip runs arbitrary code from PyPI and the torch index; install into an isolated venv and review the packages if you have supply-chain concerns.
(5) Audio data is processed locally, but if you later modify the code or install different packages, re-check network calls and endpoints.
If those points are acceptable, the skill is consistent with its stated purpose.

Like a lobster shell, security has layers: review code before you run it.
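Point (1) can be addressed with a small launcher; a hypothetical sketch (the filename and co-location with transcribe.py follow the README's example and may differ on your install):

```python
#!/usr/bin/env python3
# Hypothetical stand-in for the missing scripts/local-whisper wrapper:
# forwards all command-line arguments to transcribe.py in the same directory.
import os
import subprocess
import sys

script_dir = os.path.dirname(os.path.abspath(__file__))
target = os.path.join(script_dir, "transcribe.py")

if os.path.exists(target):
    sys.exit(subprocess.call([sys.executable, target, *sys.argv[1:]]))
else:
    print(f"transcribe.py not found at {target}", file=sys.stderr)
```

Save it as scripts/local-whisper and mark it executable, and the README's invocation works as written.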
latest · vk978572tbfh14h2h164sc9m6hs7zswan
Runtime requirements
🎙️ Clawdis
Bins: ffmpeg
