MJ Windows Faster Whisper
v1.0.0. Local speech-to-text with the faster-whisper backend (CTranslate2). Use when transcribing audio locally, setting up the faster-whisper model cache, or replac...
Faster Whisper
Overview
Use faster-whisper for local transcription with low latency and a reusable model cache.
Rules
- Do not assume `ggml` models work here; `faster-whisper` uses CTranslate2 model folders.
- Prefer CPU with `device='cpu'` and `compute_type='int8'` unless the machine is explicitly configured for GPU.
- Keep output plain text unless the user asks for timestamps or captions.
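The CPU-first rule above can be sketched as a small helper. Note that `model_load_kwargs` and its `gpu_ready` flag are hypothetical names introduced here for illustration; only `WhisperModel(path, device=..., compute_type=...)` is the actual faster-whisper API.

```python
# Sketch of the device/compute rule above. model_load_kwargs and gpu_ready
# are hypothetical helpers, not part of faster-whisper itself.
def model_load_kwargs(gpu_ready: bool = False) -> dict:
    """Return load kwargs for faster_whisper.WhisperModel per the CPU-first rule."""
    if gpu_ready:
        # Only when the machine is explicitly configured for GPU inference.
        return {"device": "cuda", "compute_type": "float16"}
    # int8 quantization keeps CPU memory use and latency low.
    return {"device": "cpu", "compute_type": "int8"}

# Usage (requires the faster-whisper package):
# model = WhisperModel(model_dir, **model_load_kwargs())
```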
Setup
- Confirm `python` and `ffmpeg` are available.
- Install the Python packages needed for local inference: `faster-whisper`, `ctranslate2`, `huggingface_hub`.
- Use the project repo https://github.com/SYSTRAN/faster-whisper for install/setup guidance.
- Download `Systran/faster-whisper-small` from https://huggingface.co/Systran/faster-whisper-small into a stable local folder such as: `C:\Users\joshu\.openclaw\tools\faster-whisper\models\Systran-faster-whisper-small`
- Reuse that folder for repeat runs.
- If the user only has a `ggml-*.bin` file, explain that it belongs to whisper.cpp and is not usable here.
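The download step above can be sketched with `huggingface_hub.snapshot_download`. The cache root mirrors the folder from the setup list; `model_dir` and `ensure_model` are hypothetical helper names, and the `model.bin` existence check relies on CTranslate2 model folders containing a `model.bin` file.

```python
# One-time model download into a stable, reusable folder (a sketch; the
# cache root is an assumption taken from the setup steps above).
from pathlib import Path

CACHE_ROOT = Path(r"C:\Users\joshu\.openclaw\tools\faster-whisper\models")

def model_dir(repo_id: str) -> Path:
    # "Systran/faster-whisper-small" -> ".../Systran-faster-whisper-small"
    return CACHE_ROOT / repo_id.replace("/", "-")

def ensure_model(repo_id: str = "Systran/faster-whisper-small") -> Path:
    target = model_dir(repo_id)
    # CTranslate2 model folders contain a model.bin; skip the download if present.
    if not (target / "model.bin").exists():
        # Imported lazily so the path helper works without the package installed.
        from huggingface_hub import snapshot_download
        snapshot_download(repo_id=repo_id, local_dir=str(target))
    return target
```

Reusing the same `local_dir` on repeat runs avoids re-downloading the model.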
Transcription
- Convert Telegram OGG/Opus audio to WAV if needed.
- Load the local model folder.
- Transcribe and return the plain-text result.
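The three steps above can be sketched as follows. The model folder repeats the path from Setup, and `ffmpeg_args` / `transcribe_file` are hypothetical helper names; `WhisperModel` and its `transcribe` method are the actual faster-whisper API.

```python
# Sketch of the convert -> load -> transcribe flow described above.
import subprocess
from pathlib import Path

# Stable model folder from the Setup section (an assumption; adjust per machine).
MODEL_DIR = r"C:\Users\joshu\.openclaw\tools\faster-whisper\models\Systran-faster-whisper-small"

def ffmpeg_args(src: str, dst: str) -> list[str]:
    # Decode Telegram OGG/Opus to 16 kHz mono WAV before transcription.
    return ["ffmpeg", "-y", "-i", src, "-ar", "16000", "-ac", "1", dst]

def transcribe_file(audio: str) -> str:
    path = Path(audio)
    if path.suffix.lower() in {".ogg", ".oga", ".opus"}:
        wav = str(path.with_suffix(".wav"))
        subprocess.run(ffmpeg_args(str(path), wav), check=True)
        audio = wav
    # Imported lazily so ffmpeg_args stays usable without the package installed.
    from faster_whisper import WhisperModel
    model = WhisperModel(MODEL_DIR, device="cpu", compute_type="int8")
    # transcribe() returns a lazy generator of segments; join them to plain text.
    segments, _info = model.transcribe(audio)
    return " ".join(seg.text.strip() for seg in segments)
```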
Version tags: latest
