Local Voice (FluidAudio TTS/STT)

v1.0.1

Local text-to-speech (TTS) and speech-to-text (STT) using FluidAudio on Apple Silicon. Sub-second voice synthesis and transcription running entirely on-device via the Apple Neural Engine. Use when setting up local voice capabilities, voice assistant integration, or replacing cloud TTS/STT services.

1 star · 1.6k downloads · 2 current · 2 all-time
by Trond Wuellner (@trondw)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for trondw/local-voice.

Prompt preview (Install & Setup):
Install the skill "Local Voice (FluidAudio TTS/STT)" (trondw/local-voice) from ClawHub.
Skill page: https://clawhub.ai/trondw/local-voice
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install local-voice

ClawHub CLI


npx clawhub@latest install local-voice
Security Scan

VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
Name/description, source files, and dependencies (FluidAudio, Hummingbird) align with a local TTS/STT daemon for Apple Silicon. However, the SKILL.md repeatedly claims "100% local / no cloud," while the STT code calls AsrModels.downloadAndLoad(version: .v3) at runtime, implying models may be fetched from the network. That contradiction matters for privacy and offline guarantees.
Instruction Scope
Runtime instructions focus on building, installing, and running a local daemon and include example curl/JS integration; they do not request unrelated files or credentials. Issues: mismatched LaunchAgent names (SKILL.md shows com.stella.tts.plist, but scripts/setup.sh creates and loads com.stella.voice.plist, and the helper script references com.stella.tts), which could confuse users and lead to accidental misconfiguration. The instructions create a persistent user LaunchAgent with KeepAlive=true and place binaries and logs under the user's home directory.
Install Mechanism
No packaged install in the registry (instruction-only) and the provided source builds from a Swift Package that pulls FluidAudio and Hummingbird from GitHub — this is expected for a compiled Swift daemon. There are no arbitrary URL downloads in the repo itself. Note: runtime model downloading (AsrModels.downloadAndLoad) is performed by the library at startup and is not part of the registry install spec.
Credentials
The skill declares no required env vars or credentials and the code does not request secrets. The LaunchAgent sets HOME in EnvironmentVariables (setup script). No unexplained credential or config access is requested.
Persistence & Privilege
The setup creates a user LaunchAgent (keeps the daemon running, RunAtLoad + KeepAlive) and copies a binary into ~/clawd/bin, so the service will persist across logins. While not an OS-level privileged install, persistent background services increase blast radius (especially combined with runtime model downloads). The skill is not marked always:true in the registry, but the service will auto-start on the user account.
What to consider before installing
This package appears to be a legitimate local TTS/STT daemon, but check the following before installing:

1. Offline claim: verify whether AsrModels.downloadAndLoad and KokoroTtsManager.initialize fetch models from the network and which hosts they contact; if you need truly offline operation, test in an isolated network or inspect the FluidAudio sources.
2. LaunchAgent mismatch: SKILL.md, setup.sh, and the helper script use different plist names (com.stella.tts vs com.stella.voice); decide which to use and inspect the plist before loading.
3. Persistent service: the setup creates a KeepAlive LaunchAgent and log files under your home directory; make sure you are comfortable with a background process auto-starting.
4. Model provenance: build from source and inspect the FluidAudio package sources (or vendor the models) to confirm model origin and license.
5. Privacy: run the setup in a controlled environment (a VM or isolated account) and monitor outbound network connections during first startup to confirm there is no unexpected exfiltration.


latest: vk973spc4aewe5908h8xnh22kyn80rrva
1.6k downloads · 1 star · 2 versions
Updated 1mo ago · v1.0.1 · MIT-0

Local Voice (FluidAudio TTS/STT)

Sub-second local voice AI for Apple Silicon Macs using FluidAudio's CoreML models.

Features

  • TTS: Kokoro model with 54 voices, ~0.6-0.8s latency
  • STT: Parakeet TDT v3, ~0.2-0.3s latency, 25 languages
  • 100% local: No cloud, no cost, works offline
  • Neural Engine: Runs on Apple's ANE for efficiency

Requirements

  • macOS 14+ on Apple Silicon (M1/M2/M3/M4)
  • Swift 5.9+
  • espeak-ng (for TTS phoneme fallback)

Quick Setup

1. Install Dependencies

brew install espeak-ng

2. Build the Daemon

cd /path/to/skill/sources
swift build -c release

3. Install Binary and Framework

mkdir -p ~/clawd/bin
cp .build/release/StellaVoice ~/clawd/bin/
cp -R .build/arm64-apple-macosx/release/ESpeakNG.framework ~/clawd/bin/
install_name_tool -add_rpath @executable_path ~/clawd/bin/StellaVoice

4. Create LaunchAgent

# launchd does not expand variables like $HOME inside plist values, so use
# an unquoted heredoc and let the shell substitute the real paths now.
mkdir -p ~/.clawdbot/logs

cat > ~/Library/LaunchAgents/com.stella.tts.plist << EOF
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.stella.tts</string>
    <key>ProgramArguments</key>
    <array>
        <string>$HOME/clawd/bin/StellaVoice</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
    <key>KeepAlive</key>
    <true/>
    <key>StandardOutPath</key>
    <string>$HOME/.clawdbot/logs/stella-tts.log</string>
    <key>StandardErrorPath</key>
    <string>$HOME/.clawdbot/logs/stella-tts.err.log</string>
</dict>
</plist>
EOF

launchctl load ~/Library/LaunchAgents/com.stella.tts.plist

API Endpoints

The daemon listens on http://127.0.0.1:18790:

TTS - Text to Speech

# Simple text to WAV
curl -X POST http://127.0.0.1:18790/synthesize -d "Hello world" -o output.wav

# With speed control (0.5-2.0)
curl -X POST "http://127.0.0.1:18790/synthesize?speed=1.2" -d "Fast!" -o output.wav

# JSON endpoint
curl -X POST http://127.0.0.1:18790/synthesize/json \
  -H "Content-Type: application/json" \
  -d '{"text": "Hello", "speed": 1.0, "deEss": true}'

STT - Speech to Text

curl -X POST http://127.0.0.1:18790/transcribe \
  --data-binary @audio.wav \
  -H "Content-Type: audio/wav"
# Returns: {"text": "transcribed text"}

Health Check

curl http://127.0.0.1:18790/health
# Returns: ok
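
When scripting against the daemon, it helps to poll /health before sending work, since the first request after startup loads models and can take several seconds. A minimal sketch (the function name and retry counts are illustrative, not part of the skill):

```shell
# Poll the health endpoint until the daemon answers, or give up after N tries.
# Usage: wait_for_health <url> [max_tries]
wait_for_health() {
  url="$1"
  tries="${2:-30}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    # -s silent, -f fail on HTTP errors, short timeout per attempt
    if curl -sf --max-time 2 "$url" > /dev/null 2>&1; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}
```

For example: `wait_for_health http://127.0.0.1:18790/health 30 && echo "daemon ready"`.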

Voice Options

The default voice is af_sky. Change it by editing the source code.

Top Kokoro voices (American female):

  • af_heart (A grade) - warm, natural
  • af_bella (A-) - expressive
  • af_sky (C-) - clear, light

All 54 voices: See references/VOICES.md

Expressiveness

Speed Control

  • speed=0.8 → Calm, relaxed
  • speed=1.0 → Natural pace
  • speed=1.2 → Energetic, upbeat

Punctuation (automatic)

  • ! → Excited tone
  • ? → Rising intonation
  • . → Neutral, falling
  • ... → Pauses

SSML Tags

<phoneme ph="kəkˈɔɹO">Kokoro</phoneme>
<sub alias="Doctor">Dr.</sub>
<say-as interpret-as="date">2024-01-15</say-as>
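
These tags go straight into the text payload; for example (assuming the daemon parses SSML fragments inline in the /synthesize body, per the tags above):

```shell
# SSML fragments are embedded directly in the plain-text request body.
ssml='Meet <phoneme ph="kəkˈɔɹO">Kokoro</phoneme>, updated <say-as interpret-as="date">2024-01-15</say-as>.'
curl -s --max-time 5 -X POST http://127.0.0.1:18790/synthesize \
  --data "$ssml" -o kokoro-ssml.wav || echo "daemon not reachable"
```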

Helper Script

See scripts/stella-tts.sh for a convenient wrapper:

scripts/stella-tts.sh "Hello world" output.wav
scripts/stella-tts.sh "Hello world" output.mp3  # Auto-converts
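
The real wrapper lives in scripts/stella-tts.sh; a stripped-down sketch of the same pattern looks like this (the function name and default output path are illustrative, and the mp3 auto-conversion is omitted):

```shell
# Minimal TTS wrapper: POST text to the local daemon and write a WAV file.
# Usage: stella_say <text> [output.wav]
stella_say() {
  text="$1"
  out="${2:-out.wav}"
  # -f makes curl fail (non-zero exit) if the daemon returns an HTTP error
  curl -sf -X POST http://127.0.0.1:18790/synthesize \
    --data "$text" -o "$out"
}
```

For example: `stella_say "Hello world" hello.wav`.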

Integration Example

For voice assistants, update your voice proxy to use local endpoints:

// STT
const response = await fetch('http://127.0.0.1:18790/transcribe', {
    method: 'POST',
    headers: { 'Content-Type': 'audio/wav' },
    body: audioData
});
const { text } = await response.json();

// TTS
const audio = await fetch('http://127.0.0.1:18790/synthesize', {
    method: 'POST',
    body: textToSpeak
});

Troubleshooting

Library not loaded (ESpeakNG)

  • Ensure ESpeakNG.framework is in the same directory as the binary
  • Run install_name_tool -add_rpath @executable_path /path/to/binary

Slow first request

  • First request loads models (~8-10s)
  • Subsequent requests are sub-second

x86 vs ARM

  • Must build and run on ARM64 native (not Rosetta)
  • Check with uname -m (should show arm64)

Source Code

The daemon source is in the sources/ directory. It is a Swift package using:

  • FluidAudio (TTS + STT models)
  • Hummingbird (HTTP server)

Rebuild after modifying:

cd sources && swift build -c release
