## Install

```
openclaw skills install jetson-cuda-voice
```

Fully offline, CUDA-accelerated local voice assistant pipeline for NVIDIA Jetson: wake word (openWakeWord) → real-time VAD → whisper.cpp GPU STT → LLM → Piper TTS. Includes dynamic ambient noise calibration, conversation history, and ReSpeaker LED feedback. Tested on Jetson Xavier NX (sm_72, JetPack 5.1.4) with a ReSpeaker USB Mic Array.

No cloud is used for STT or TTS — only the LLM call uses the internet (OpenRouter or any OpenAI-compatible endpoint).
## Architecture

```
ReSpeaker mic (hw:Array,0, S24_3LE, 16kHz)
  ↓ arecord raw stream — never restarted mid-conversation
openWakeWord — "Hey Jarvis" detection (~32ms chunks)
  ↓ wake word triggered → two-tone beep
_measure_ambient() — 480ms median RMS → dynamic VAD thresholds
  ↓
transcribe_stream() — VAD + whisper.cpp CUDA HTTP (~2-4s per utterance)
  ↓
ask_llm() — OpenRouter or local OpenAI-compatible API (~1-2s)
  ↓
Piper TTS — offline neural TTS, hot-loaded at startup → aplay
  ↓
ReSpeaker LEDs: 🔵 blue=listening 🩵 cyan=thinking ⚫ off=done 🔴 red=error
```
Total latency: ~5-8 seconds from wake word to first spoken word.
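The calibration and end-of-utterance logic can be sketched in Python. This is an illustrative reconstruction, not the pipeline's actual code: `chunk_rms`, `utterance_ended`, the 2.5x/1.6x multipliers, and the 8-chunk hangover are all assumptions.

```python
import struct
import statistics

CHUNK_MS = 32  # matches the ~32 ms chunks fed to openWakeWord

def chunk_rms(raw: bytes) -> float:
    """RMS level of one chunk of signed 16-bit little-endian PCM."""
    samples = struct.unpack(f"<{len(raw) // 2}h", raw)
    return (sum(s * s for s in samples) / len(samples)) ** 0.5

def measure_ambient(chunks: list[bytes]) -> tuple[float, float]:
    """Median RMS over a ~480 ms window -> (speech, silence) thresholds.
    The multipliers are illustrative guesses, not the pipeline's values."""
    ambient = statistics.median(chunk_rms(c) for c in chunks)
    return ambient * 2.5, ambient * 1.6

def utterance_ended(levels: list[float], silence_thr: float, hangover: int = 8) -> bool:
    """End the utterance once the last `hangover` chunks all fall below the
    silence threshold (~256 ms of quiet at 32 ms per chunk)."""
    return len(levels) >= hangover and all(l < silence_thr for l in levels[-hangover:])
```

Deriving both thresholds from the measured ambient floor is what lets the pipeline work without tuning `VOICE_SPEECH_RMS`/`VOICE_SILENCE_RMS` by hand in most rooms.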
A single arecord pipe feeds both wake word detection and STT. Whisper runs with its language set to auto, so transcription works multilingually.

## Tested hardware

| Component | Tested | Notes |
|---|---|---|
| Jetson Xavier NX | ✅ | ARM64, sm_72, 8GB, JetPack 5.1.4 |
| ReSpeaker USB Mic Array v1.0 | ✅ | 2886:0007, S24_3LE, 16kHz |
| Any ALSA speaker | ✅ | tested with Creative MUVO 2c |
| Other Jetson models | ✅ | change CMAKE_CUDA_ARCHITECTURES |
## Quick start

```bash
# 1. Install Python deps
pip install openwakeword piper-tts numpy requests pyusb

# 2. Build whisper.cpp with CUDA (see BUILD.md — ~45 min, one-time)
#    Then place the binary at ~/.local/bin/whisper-server-gpu

# 3. Download a Piper voice model
mkdir -p ~/.local/share/piper/voices && cd ~/.local/share/piper/voices
wget https://huggingface.co/rhasspy/piper-voices/resolve/main/en/en_US/lessac/medium/en_US-lessac-medium.onnx
wget https://huggingface.co/rhasspy/piper-voices/resolve/main/en/en_US/lessac/medium/en_US-lessac-medium.onnx.json

# 4. Install and start the services
export OPENROUTER_API_KEY=your-key-here
bash pipeline/setup.sh
bash pipeline/manage.sh start

# Say "Hey Jarvis" — blue LED = listening
```
## Building whisper.cpp

See BUILD.md for full instructions. The critical flags:

```bash
cmake .. -DGGML_CUDA=ON -DCMAKE_CUDA_ARCHITECTURES=72 -DCMAKE_BUILD_TYPE=Release
make -j4   # ~45 min — detach with nohup if needed
```
⚠️ `CMAKE_CUDA_ARCHITECTURES=72` (sm_72 = Xavier NX) is critical: the default multi-arch compilation OOMs on an 8GB Jetson.
Architecture map:

| Jetson model | `CMAKE_CUDA_ARCHITECTURES` |
|---|---|
| Orin series | 87 |
| Xavier NX / AGX Xavier | 72 |
| TX2 | 62 |
| Nano | 53 |

## Piper voices

```bash
mkdir -p ~/.local/share/piper/voices && cd "$_"

# English (required)
wget https://huggingface.co/rhasspy/piper-voices/resolve/main/en/en_US/lessac/medium/en_US-lessac-medium.onnx
wget https://huggingface.co/rhasspy/piper-voices/resolve/main/en/en_US/lessac/medium/en_US-lessac-medium.onnx.json

# Greek (optional — any language from huggingface.co/rhasspy/piper-voices works)
wget https://huggingface.co/rhasspy/piper-voices/resolve/main/el/el_GR/rapunzelina/medium/el_GR-rapunzelina-medium.onnx
wget https://huggingface.co/rhasspy/piper-voices/resolve/main/el/el_GR/rapunzelina/medium/el_GR-rapunzelina-medium.onnx.json
```
## Services

setup.sh writes and enables the systemd user services automatically:

```bash
bash pipeline/setup.sh [/path/to/voice_pipeline.py] [API_KEY]
```

Or with the env var:

```bash
OPENROUTER_API_KEY=sk-... bash pipeline/setup.sh
```

Re-run it to update an existing install.
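For orientation, a generated user unit might look roughly like this. The unit name, paths, and options below are illustrative guesses, not the actual file setup.sh writes:

```ini
# ~/.config/systemd/user/voice-pipeline.service (illustrative sketch)
[Unit]
Description=Jetson CUDA voice assistant pipeline
After=sound.target

[Service]
Environment=OPENROUTER_API_KEY=sk-...
ExecStart=/usr/bin/python3 %h/jetson-cuda-voice/pipeline/voice_pipeline.py
Restart=on-failure

[Install]
WantedBy=default.target
```

Because these are user units, they answer to `systemctl --user` rather than system-wide `systemctl`; manage.sh wraps the common operations.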
## Mic setup

```bash
# Optimal gain (no clipping, RMS ~180 ambient)
amixer -c 0 set Mic 90

# Prevent USB autosuspend (the mic sleeps after 2s idle without this)
sudo tee /etc/udev/rules.d/99-usb-audio-nosuspend.rules << 'EOF'
ACTION=="add", SUBSYSTEM=="usb", ATTR{idVendor}=="2886", ATTR{idProduct}=="0007", \
  ATTR{power/control}="on", ATTR{power/autosuspend}="-1"
EOF
sudo udevadm control --reload-rules
```
## Commands

```bash
bash pipeline/manage.sh start      # start both services
bash pipeline/manage.sh stop       # stop both services
bash pipeline/manage.sh restart    # restart both
bash pipeline/manage.sh status     # systemd status
bash pipeline/manage.sh logs       # tail live log
bash pipeline/manage.sh test-mic   # record 4s + play back
bash pipeline/manage.sh test-stt   # record 4s + transcribe
bash pipeline/manage.sh test-tts   # speak a test phrase
```
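Under the hood, a transcription call posts the recorded WAV to the local whisper-server. A sketch of the equivalent request with `requests`; the multipart field names follow whisper.cpp's server example and should be checked against your build.

```python
import requests

WHISPER_URL = "http://127.0.0.1:8181/inference"

def build_stt_request(wav_bytes: bytes, url: str = WHISPER_URL) -> requests.PreparedRequest:
    """Prepare (without sending) the multipart POST for one transcription.
    Send it with requests.Session().send(...) once whisper-server is up."""
    return requests.Request(
        "POST",
        url,
        files={"file": ("audio.wav", wav_bytes, "audio/wav")},
        data={"response_format": "json"},
    ).prepare()
```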
## Configuration

| Variable | Default | Description |
|---|---|---|
| `OPENROUTER_API_KEY` | (required) | API key for OpenRouter (or any OpenAI-compatible provider) |
| `VOICE_MIC` | `hw:Array,0` | ALSA mic device name |
| `VOICE_SPEAKER` | `hw:C2c,0` | ALSA speaker device name |
| `VOICE_LLM_URL` | OpenRouter | LLM API endpoint |
| `VOICE_LLM_MODEL` | `anthropic/claude-3.5-haiku` | Model name |
| `VOICE_WAKE_THRESHOLD` | 0.5 | Wake word confidence (0.0–1.0) |
| `VOICE_SPEECH_RMS` | 400 | Fallback speech RMS threshold |
| `VOICE_SILENCE_RMS` | 250 | Fallback silence RMS threshold |
| `VOICE_UTC_OFFSET` | 0 | Timezone offset in hours for LLM context |
| `PIPER_VOICES_DIR` | `~/.local/share/piper/voices` | Piper voice models directory |
| `WHISPER_URL` | `http://127.0.0.1:8181/inference` | whisper-server endpoint |
| `WHISPER_BIN` | `~/.local/bin/whisper-server-gpu` | whisper-server binary (used by setup.sh) |
| `WHISPER_MODEL` | `~/.local/share/whisper/models/ggml-base.bin` | Whisper model (used by setup.sh) |
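The pipeline reads these at startup along the lines of the following pattern. This is an illustrative sketch, not the actual code in voice_pipeline.py:

```python
import os

def load_config() -> dict:
    """Environment overrides falling back to the documented defaults."""
    return {
        "mic": os.environ.get("VOICE_MIC", "hw:Array,0"),
        "speaker": os.environ.get("VOICE_SPEAKER", "hw:C2c,0"),
        "wake_threshold": float(os.environ.get("VOICE_WAKE_THRESHOLD", "0.5")),
        "speech_rms": int(os.environ.get("VOICE_SPEECH_RMS", "400")),
        "silence_rms": int(os.environ.get("VOICE_SILENCE_RMS", "250")),
        "whisper_url": os.environ.get("WHISPER_URL", "http://127.0.0.1:8181/inference"),
    }
```

Setting a variable in the systemd unit (or before running setup.sh) is enough; no config file is needed.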
## Troubleshooting

| Problem | Cause / fix |
|---|---|
| Mic records silence | Set the gain: `amixer -c 0 set Mic 90`. Use `hw:Array,0`, not `hw:0,0` — card numbers shift on reboot. |
| Records the full 6s timeout, never cuts off | Ambient noise is above the `VOICE_SILENCE_RMS` fallback. Dynamic calibration handles this automatically; otherwise set `VOICE_SILENCE_RMS` slightly above your measured ambient floor. |
| `[BEEPING]` or `(bell dings)` in transcript | The confirmation beep is bleeding into the recording. |
| Whisper OOM during build | Use `-DCMAKE_CUDA_ARCHITECTURES=72` — the default multi-arch build exhausts 8GB RAM. Build with `-j4`, not `-j6`. |
| LED not lighting up | `pip install pyusb` |
| Wake word triggers constantly (false positives) | Raise `VOICE_WAKE_THRESHOLD` to 0.7 or higher. |

## File structure

```
jetson-cuda-voice/
├── SKILL.md               ← this file
├── BUILD.md               ← whisper.cpp CUDA build guide
└── pipeline/
    ├── voice_pipeline.py  ← main pipeline
    ├── led.py             ← ReSpeaker LED control (optional)
    ├── setup.sh           ← one-command service installer
    └── manage.sh          ← start/stop/status/test
```