Faster Whisper Local Service

v0.2.0

OpenClaw local speech-to-text backend using faster-whisper over HTTP on 127.0.0.1:18790. Use when you want voice transcription without external APIs, without...

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description match the behavior: the scripts create a venv, install faster-whisper, write a local HTTP server, and register a per-user systemd service. All created files and env vars are clearly for configuring a local STT backend.
Instruction Scope
SKILL.md and scripts only perform actions needed for a local transcription service: check for python and gst-launch, create venv, pip-install faster-whisper, write server script and systemd unit, and run the service. The server enforces upload limits, magic-byte checks, and binds to 127.0.0.1. It does spawn gst-launch for audio conversion (via subprocess with argument list), which is expected for media handling and is explicitly called out and constrained.
Install Mechanism
No registry install spec, but deploy.sh pip-installs faster-whisper from PyPI into a local venv (moderate risk but expected). Model weights are downloaded from Hugging Face on first run (large files, requires Internet). No downloads from untrusted custom URLs or URL shorteners.
Credentials
The skill does not require secrets or unrelated environment variables. Environment variables present are configuration knobs (port, model size, device, allowed origin, max upload) and are proportional to the stated function.
Persistence & Privilege
The skill creates a per-user systemd service (~/.config/systemd/user/...) and files under a configurable workspace in the user's home. It does not request always:true or system-wide privileges, and it does not modify other skills' configurations.
Assessment
This appears to be a legitimate local transcription installer. Before installing, consider:

  • The deploy script pip-installs faster-whisper into a user venv and creates a systemd user service; both run with your user privileges.
  • On first run, faster-whisper downloads large model weights from Hugging Face. Make sure you want that network activity and disk use.
  • The service uses gst-launch-1.0 (an OS package) for audio conversion; keep GStreamer updated.
  • Review transcribe-server.py (included) if you want to audit the behavior yourself.
  • If you need stronger isolation, run the service under a dedicated user account, in a container, or in a VM.

If you do not trust the faster-whisper package source, inspect or pin dependencies before installing.

Like a lobster shell, security has layers — review code before you run it.

Tags: cpu · faster-whisper · free · latest · local · no-api · offline · openclaw · secure · service · stt · transcription · voice


SKILL.md

Faster Whisper Local Service

Provision a local STT backend used by voice skills.

What this sets up

  • Python venv for faster-whisper
  • transcribe-server.py HTTP endpoint at http://127.0.0.1:18790/transcribe
  • systemd user service: openclaw-transcribe.service
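The systemd user unit is a small per-user service definition. A sketch of what deploy.sh generates might look like this (paths and options are illustrative, not the actual generated file):

```ini
[Unit]
Description=OpenClaw local transcription service (faster-whisper)

[Service]
# %h expands to the user's home; the real paths come from WORKSPACE in deploy.sh.
ExecStart=%h/.openclaw/workspace/.venv-faster-whisper/bin/python %h/.openclaw/workspace/voice-input/transcribe-server.py
Restart=on-failure

[Install]
WantedBy=default.target
```

Because it installs under WantedBy=default.target as a user unit, it starts with your login session and never needs root.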

Important: Model download on first run

On first startup, faster-whisper downloads model weights from Hugging Face (~1.5 GB for medium). This requires internet access and disk space. After the initial download, models are cached locally and the service runs fully offline.

Model     Download size  RAM usage
tiny      ~75 MB         ~400 MB
base      ~150 MB        ~500 MB
small     ~500 MB        ~800 MB
medium    ~1.5 GB        ~1.4 GB
large-v3  ~3.0 GB        ~3.5 GB

To pre-download models for an air-gapped environment, see the faster-whisper documentation.

Security notes

Network isolation

  • Binds to 127.0.0.1 only — not reachable from the network.
  • CORS restricted to a single origin (https://127.0.0.1:8443 by default).
  • No credentials, API keys, or secrets are used or stored.
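The single-origin CORS policy amounts to an exact string match on the request's Origin header. A minimal sketch in Python (the default origin and function shape here are illustrative; see transcribe-server.py for the actual implementation):

```python
import os

# Illustrative default; the real server reads its own configuration at startup.
ALLOWED_ORIGIN = os.environ.get("TRANSCRIBE_ALLOWED_ORIGIN", "https://127.0.0.1:8443")

def cors_headers(request_origin: str, allowed: str = ALLOWED_ORIGIN) -> dict:
    """Echo the origin back only on an exact match with the single allowed origin."""
    if request_origin == allowed:
        return {"Access-Control-Allow-Origin": allowed}
    return {}  # no CORS headers -> browsers block cross-origin reads
```

An exact match (rather than a substring or wildcard check) means lookalike origins such as https://127.0.0.1:8443.evil.example get no CORS headers at all.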

Input validation

  • Upload size limit: Requests exceeding the configured limit are rejected before processing (HTTP 413). Default: 50 MB, configurable via MAX_UPLOAD_MB.
  • Magic-byte check: Only files with recognized audio signatures (WAV, OGG, FLAC, MP3, WebM, M4A) are accepted. Unrecognized formats are rejected (HTTP 415) before reaching GStreamer.
  • Subprocess safety: All arguments to gst-launch-1.0 are passed as a list — no shell expansion or injection is possible.
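The two pre-filters above can be sketched as follows (signatures are abbreviated and the limit is hardcoded for illustration; the shipped server's list and handling may differ, and formats like WebM and M4A need offset-aware checks omitted here):

```python
MAX_UPLOAD_MB = 50  # default; configurable via MAX_UPLOAD_MB in the real service

# A few common audio magic bytes (abbreviated illustration).
AUDIO_MAGIC = (b"RIFF", b"OggS", b"fLaC", b"ID3", b"\xff\xfb")

def check_upload(data: bytes) -> int:
    """Return an HTTP status: 200 OK, 413 too large, or 415 unrecognized format."""
    if len(data) > MAX_UPLOAD_MB * 1024 * 1024:
        return 413  # Payload Too Large -- rejected before any decoding
    if not any(data.startswith(magic) for magic in AUDIO_MAGIC):
        return 415  # Unsupported Media Type -- never reaches GStreamer
    return 200
```

The size check runs first, so an oversized payload is refused before its bytes are even inspected.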

GStreamer dependency

The service uses GStreamer's decodebin for audio format conversion. Like any media library, GStreamer's parsers process binary data and should be kept up to date. Mitigation: install gst-launch-1.0 from your OS vendor's trusted packages and apply security updates regularly. The magic-byte pre-filter above reduces the attack surface by rejecting non-audio payloads before they reach GStreamer.
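Passing the pipeline as an argument list means no shell ever parses it. A sketch of how such a command might be assembled (the exact pipeline in transcribe-server.py may differ):

```python
def build_gst_cmd(src_path: str, dst_path: str) -> list[str]:
    """Assemble the gst-launch-1.0 argv as a list; no shell string is built,
    so metacharacters in file names are never interpreted."""
    # Illustrative pipeline: decode anything, resample to 16 kHz mono WAV.
    return [
        "gst-launch-1.0", "-q",
        "filesrc", f"location={src_path}", "!",
        "decodebin", "!",
        "audioconvert", "!",
        "audioresample", "!",
        "audio/x-raw,rate=16000,channels=1", "!",
        "wavenc", "!",
        "filesink", f"location={dst_path}",
    ]
```

The list is then handed to subprocess.run(cmd, check=True) without shell=True, so a hostile file name stays a literal argument instead of becoming shell syntax.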

No data exfiltration

  • No outbound network calls (after initial model download).
  • No telemetry, analytics, or phone-home behavior.
  • Temporary files are created in a per-request TemporaryDirectory and cleaned up immediately.

Reproducibility defaults

  • Pinned package: faster-whisper==1.1.1 (override via env)
  • Explicit dependency check for gst-launch-1.0
  • CORS restricted to one origin by default
  • Configurable workspace/service paths (no hardcoded user path)
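Reading these knobs reduces to plain environment-variable lookups with the documented defaults; a sketch (function name and dict shape are illustrative, not the server's actual code):

```python
import os

def load_config(env=os.environ) -> dict:
    """Read the service's configuration knobs with their documented defaults."""
    return {
        "port": int(env.get("TRANSCRIBE_PORT", "18790")),
        "model_size": env.get("WHISPER_MODEL_SIZE", "medium"),
        "language": env.get("WHISPER_LANGUAGE", "auto"),
        "allowed_origin": env.get("TRANSCRIBE_ALLOWED_ORIGIN", "https://127.0.0.1:8443"),
        "max_upload_mb": int(env.get("MAX_UPLOAD_MB", "50")),
    }
```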

Deploy

bash scripts/deploy.sh

With custom settings:

WORKSPACE=~/.openclaw/workspace \
TRANSCRIBE_PORT=18790 \
WHISPER_MODEL_SIZE=medium \
WHISPER_LANGUAGE=auto \
TRANSCRIBE_ALLOWED_ORIGIN=https://10.0.0.42:8443 \
bash scripts/deploy.sh

Language setting

Default: auto (auto-detect language). Set WHISPER_LANGUAGE=de for German-only, en for English-only, and so on. A fixed language is faster and more accurate if you only ever use one language.

Idempotent: safe to run repeatedly.

What this skill modifies

What               Path                                                Action
Python venv        $WORKSPACE/.venv-faster-whisper/                    Creates venv, installs faster-whisper via pip
Transcribe server  $WORKSPACE/voice-input/transcribe-server.py         Writes server script
Systemd service    ~/.config/systemd/user/openclaw-transcribe.service  Creates + enables persistent service
Model cache        ~/.cache/huggingface/                               Downloads model weights on first run

Uninstall

systemctl --user stop openclaw-transcribe.service
systemctl --user disable openclaw-transcribe.service
rm -f ~/.config/systemd/user/openclaw-transcribe.service
systemctl --user daemon-reload

Optional full cleanup:

rm -rf ~/.openclaw/workspace/.venv-faster-whisper
rm -f ~/.openclaw/workspace/voice-input/transcribe-server.py

Verify

bash scripts/status.sh

Expected:

  • service active
  • endpoint responds (HTTP 200/500 acceptable for invalid sample payload)

Notes

  • This skill provides backend transcription only.
  • Pair with webchat-voice-proxy for browser mic + HTTPS/WSS integration.
  • For one-step install, use webchat-voice-full-stack (deploys backend + proxy in order).

Files

4 total
