Voice Assistant

v0.1.0

Real-time voice assistant for OpenClaw. Streams mic audio through configurable STT (Deepgram or ElevenLabs) into your OpenClaw agent, then speaks the response via configurable TTS (Deepgram Aura or ElevenLabs). Sub-2s time-to-first-audio with full streaming at every stage.

4 stars · 1.9k downloads · 15 current · 17 all-time
by Charan Tej Mandali (@charantejmandali18)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt below, then paste it into OpenClaw to install charantejmandali18/voice-assistant.

Prompt preview (Install & Setup):
Install the skill "Voice Assistant" (charantejmandali18/voice-assistant) from ClawHub.
Skill page: https://clawhub.ai/charantejmandali18/voice-assistant
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required binaries: uv
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install voice-assistant

ClawHub CLI


npx clawhub@latest install voice-assistant

Security Scan

VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)

Purpose & Capability
The code and SKILL.md implement a real-time STT→LLM→TTS voice pipeline (Deepgram/ElevenLabs + OpenClaw gateway), which matches the name and description. However, the registry metadata is inconsistent: it declares no required env vars and lists VOICE_STT_PROVIDER as the primary credential, but the server actually expects and uses sensitive API keys (DEEPGRAM_API_KEY, ELEVENLABS_API_KEY) plus OPENCLAW_GATEWAY_URL/OPENCLAW_MODEL. The primaryEnv should point at a secret like DEEPGRAM_API_KEY/ELEVENLABS_API_KEY, not the provider selector. This mismatch is misleading and confusing.
Instruction Scope
SKILL.md provides concrete runtime instructions (copy .env.example to .env, fill in API keys, run uv run scripts/server.py, open browser). The runtime instructions and server code only reference expected files (.env) and the OpenClaw gateway; they stream microphone audio to configured STT/TTS providers and the OpenClaw gateway as described. There are no instructions to read unrelated system files or exfiltrate secrets beyond the STT/TTS and gateway endpoints.
Install Mechanism
The install spec is a single brew formula ('uv'), a standard package-manager install path (lower risk). The skill includes Python code and a pyproject.toml declaring normal Python dependencies (fastapi, uvicorn, httpx, websockets). No arbitrary downloads, URL shorteners, or remote archive extractions are present in the provided install spec.
Credentials
The skill requires multiple sensitive environment variables at runtime (DEEPGRAM_API_KEY, ELEVENLABS_API_KEY, OPENCLAW_GATEWAY_URL, OPENCLAW_MODEL) but the registry metadata lists no required env vars and sets primaryEnv to VOICE_STT_PROVIDER (a non-secret). This is misleading: users will need to supply API keys for third-party STT/TTS providers and a gateway URL, but the manifest does not declare them. Requesting multiple third-party API keys is reasonable for a voice skill, but the metadata/manifest should reflect that clearly.
Persistence & Privilege
The skill does not request always:true and does not modify other skills or system-wide settings. It runs as a local server and uses normal network connections to STT/TTS providers and the OpenClaw gateway. Autonomous invocation remains possible (platform default) but is not combined with unusual privileges here.
What to consider before installing
This package implements the described voice pipeline and will stream your microphone audio and transcripts to third-party STT/TTS services (Deepgram and/or ElevenLabs) and to whatever OpenClaw gateway URL you provide. Before installing:

  1. Be aware you must supply API keys (DEEPGRAM_API_KEY and/or ELEVENLABS_API_KEY) and your OPENCLAW_GATEWAY_URL/OPENCLAW_MODEL; the registry metadata does NOT list these, so the manifest is misleading.
  2. Only install if you trust the skill author and the third-party providers; audio and transcripts will leave your machine.
  3. Inspect scripts/server.py locally (already included) and run it in a limited environment (local machine or sandbox) before granting broader access.
  4. If you don't want to expose real data, test with dummy keys and a local gateway first.
  5. Consider updating the manifest to correctly declare required secrets (primaryEnv should reference the actual API key variable), or ask the publisher for clarification.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🎙️ Clawdis
Bins: uv
Primary env: VOICE_STT_PROVIDER

Install

Install uv (brew)
Bins: uv
brew install uv
Tags: deepgram · elevenlabs · latest · realtime · speech · voice
1.9k downloads
4 stars
1 version
Updated 1mo ago
v0.1.0
MIT-0

Voice Assistant

Real-time voice interface for your OpenClaw agent. Talk to your agent and hear it respond — with configurable STT and TTS providers, full streaming at every stage, and sub-2 second time-to-first-audio.

Architecture

Browser Mic → WebSocket → STT (Deepgram / ElevenLabs) → Text
  → OpenClaw Gateway (/v1/chat/completions, streaming) → Response Text
  → TTS (Deepgram Aura / ElevenLabs) → Audio chunks → Browser Speaker

The voice interface connects to your running OpenClaw gateway's OpenAI-compatible endpoint. It's the same agent with all its context, tools, and memory — just with a voice.
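
Because the gateway endpoint is OpenAI-compatible, the exchange can be probed directly. Below is a minimal sketch, assuming httpx (a declared dependency) and an SSE response stream; the function name and prompt are illustrative, not part of the skill:

import json

import httpx

GATEWAY_URL = "http://localhost:4141/v1"    # OPENCLAW_GATEWAY_URL from .env
MODEL = "claude-sonnet-4-5-20250929"        # OPENCLAW_MODEL from .env

def stream_reply(prompt: str):
    """Yield response tokens from the gateway's OpenAI-compatible SSE stream."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "stream": True,
    }
    with httpx.stream("POST", f"{GATEWAY_URL}/chat/completions",
                      json=payload, timeout=60.0) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines():
            if not line.startswith("data: "):
                continue                     # skip keep-alives and blank lines
            data = line[len("data: "):]
            if data.strip() == "[DONE]":
                break
            delta = json.loads(data)["choices"][0].get("delta", {})
            if delta.get("content"):
                yield delta["content"]

Each yielded token is what step 4 of "How It Works" below hands on toward TTS.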

Quick Start

cd {baseDir}
cp .env.example .env
# Fill in your API keys and gateway URL
uv run scripts/server.py
# Open http://localhost:7860 and click the mic
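
For orientation, here is roughly the shape such a server takes. This is an illustrative sketch only, assuming the fastapi/uvicorn dependencies declared in pyproject.toml; the real entry point is scripts/server.py, and its route names may differ:

import os

import uvicorn
from fastapi import FastAPI, WebSocket, WebSocketDisconnect

app = FastAPI()

@app.websocket("/ws")
async def audio_socket(ws: WebSocket):
    await ws.accept()
    try:
        while True:
            pcm = await ws.receive_bytes()   # raw PCM frames from the browser mic
            # pipe pcm to STT, then the gateway, then TTS ("How It Works" below)
    except WebSocketDisconnect:
        pass                                 # browser closed the tab or the mic

if __name__ == "__main__":
    uvicorn.run(app, host="127.0.0.1",
                port=int(os.environ.get("VOICE_SERVER_PORT", "7860")))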

Supported Providers

STT (Speech-to-Text)

Provider     Model                Latency      Notes
Deepgram     nova-2 (streaming)   ~200-300ms   WebSocket streaming, best accuracy/speed
ElevenLabs   Scribe v1            ~300-500ms   REST-based, good multilingual

TTS (Text-to-Speech)

Provider     Model        Latency   Notes
Deepgram     aura-2       ~200ms    WebSocket streaming, low cost
ElevenLabs   Turbo v2.5   ~300ms    Best voice quality, streaming

Configuration

All configuration is via environment variables in .env:

# === Required ===
OPENCLAW_GATEWAY_URL=http://localhost:4141/v1    # Your OpenClaw gateway
OPENCLAW_MODEL=claude-sonnet-4-5-20250929        # Model your gateway routes to

# === STT Provider (pick one) ===
VOICE_STT_PROVIDER=deepgram                      # "deepgram" or "elevenlabs"
DEEPGRAM_API_KEY=your-key-here                   # Required if STT=deepgram
ELEVENLABS_API_KEY=your-key-here                 # Required if STT=elevenlabs

# === TTS Provider (pick one) ===
VOICE_TTS_PROVIDER=elevenlabs                    # "deepgram" or "elevenlabs"
# Uses the same API keys as above

# === Optional Tuning ===
VOICE_TTS_VOICE=rachel                           # ElevenLabs voice name/ID
VOICE_TTS_VOICE_DG=aura-2-theia-en              # Deepgram Aura voice
VOICE_VAD_SILENCE_MS=400                         # Silence before end-of-turn (ms)
VOICE_SAMPLE_RATE=16000                          # Audio sample rate
VOICE_SERVER_PORT=7860                           # Server port
VOICE_SYSTEM_PROMPT=""                           # Optional system prompt override
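
Server-side, provider selection and key validation follow directly from these variables. A minimal sketch of that startup check, with an illustrative function name (not the actual server code):

import os

def load_voice_config() -> dict:
    stt = os.environ.get("VOICE_STT_PROVIDER", "deepgram")
    tts = os.environ.get("VOICE_TTS_PROVIDER", "elevenlabs")
    required = {"OPENCLAW_GATEWAY_URL", "OPENCLAW_MODEL"}
    if "deepgram" in (stt, tts):
        required.add("DEEPGRAM_API_KEY")     # needed by either pipeline stage
    if "elevenlabs" in (stt, tts):
        required.add("ELEVENLABS_API_KEY")
    missing = sorted(v for v in required if not os.environ.get(v))
    if missing:
        raise RuntimeError(f"Missing required env vars: {', '.join(missing)}")
    return {"stt": stt, "tts": tts}

Failing fast like this avoids the confusing mid-call errors that appear when a key is only checked at first use.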

Provider Combinations

Setup                              Best For
Deepgram STT + ElevenLabs TTS      Best quality voice output
Deepgram STT + Deepgram TTS        Lowest latency, single vendor
ElevenLabs STT + ElevenLabs TTS    Best multilingual support

How It Works

  1. Browser captures mic audio via Web Audio API and streams raw PCM over a WebSocket
  2. Server receives audio and pipes it to the configured STT provider's streaming endpoint
  3. STT returns partial transcripts in real-time; on end-of-utterance the full text is sent to the OpenClaw gateway
  4. OpenClaw gateway streams the LLM response token-by-token via SSE (Server-Sent Events)
  5. Tokens are accumulated into sentence-sized chunks and streamed to the TTS provider (see the chunking sketch after this list)
  6. TTS returns audio chunks that are immediately forwarded to the browser over the same WebSocket
  7. Browser plays audio using the Web Audio API with a jitter buffer for smooth playback
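
Step 5 is where streaming latency is won or lost: sending TTS one full response adds seconds, while sending it word fragments ruins prosody. A minimal sketch of a sentence chunker; the boundary regex is illustrative, not the server's actual rule:

import re

SENTENCE_END = re.compile(r"[.!?]\s|\n")     # naive end-of-sentence boundary

def sentence_chunks(tokens):
    """Group a stream of LLM tokens into sentence-sized strings for TTS."""
    buf = ""
    for tok in tokens:
        buf += tok
        while (m := SENTENCE_END.search(buf)):
            yield buf[:m.end()].strip()      # ship a finished sentence to TTS
            buf = buf[m.end():]
    if buf.strip():
        yield buf.strip()                    # flush the trailing fragment

Fed with the gateway's token stream, the first sentence can reach TTS while the model is still writing the second.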

Interruption Handling (Barge-In)

When the user starts speaking while the agent is still talking (see the cancellation sketch after this list):

  • Current TTS audio is immediately cancelled
  • The agent stops its current response
  • New STT session begins capturing the user's interruption
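
A sketch of how that cancellation could be wired with asyncio tasks; the class and method names are illustrative, not taken from scripts/server.py:

import asyncio
from typing import Optional

class TurnManager:
    """Tracks the in-flight speaking task so barge-in can cancel it."""

    def __init__(self) -> None:
        self._speaking: Optional[asyncio.Task] = None

    def start_reply(self, coro) -> None:
        # coro streams LLM text into TTS and forwards audio to the browser
        self._speaking = asyncio.create_task(coro)

    async def on_user_speech_start(self) -> None:
        # Barge-in: kill the current LLM/TTS turn before a new STT session opens
        task = self._speaking
        if task and not task.done():
            task.cancel()
            try:
                await task
            except asyncio.CancelledError:
                pass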

Usage Examples

User: "Hey, set up my voice assistant"
→ OpenClaw runs: cd {baseDir} && cp .env.example .env
→ Opens .env for the user to fill in API keys
→ Runs: uv run scripts/server.py

User: "Start a voice chat"
→ Opens http://localhost:7860 in the browser

User: "Switch TTS to Deepgram"
→ Updates VOICE_TTS_PROVIDER=deepgram in .env
→ Restarts the server

Troubleshooting

  • No audio output? Check that your TTS API key is valid and the provider is set correctly
  • High latency? Use Deepgram for both STT and TTS; ensure your gateway is on the same network
  • Cuts off speech? Increase VOICE_VAD_SILENCE_MS to 600-800ms
  • Echo/feedback? Use headphones, or enable the built-in echo cancellation in the browser UI

Latency Budget

Stage                      Target    Actual (typical)
Audio capture + VAD        <200ms    ~100-150ms
STT transcription          <400ms    ~200-400ms
OpenClaw LLM first token   <1500ms   ~500-1500ms
TTS first audio chunk      <400ms    ~200-400ms
Total first audio          <2.5s     ~1.0-2.5s
