Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Local Llama TTS

v1.0.0

Local text-to-speech using llama-tts (llama.cpp) and OuteTTS-1.0-0.6B model.

Security Scan
VirusTotal
Suspicious
OpenClaw
Benign (high confidence)
Purpose & Capability
Name and description describe local TTS. The only required binary is 'llama-tts' and the included script invokes that binary with model and vocoder files — this is proportionate to the claimed purpose.
Instruction Scope
SKILL.md and the script only run the local 'llama-tts' binary and ask you to download models from Hugging Face. Minor notes: the recommended model/vocoder paths are hardcoded to /data/public/machine-learning/models/text-to-speach/, which may be a shared/global path, and the instructions link a different vocoder release (Q5_1) than the filename the script uses (Q4_0), though SKILL.md notes this as an alternative. No instructions request unrelated files, credentials, or external endpoints beyond the model download links.
Install Mechanism
No install spec — instruction-only plus a wrapper script. This is low-risk; nothing in the skill tries to fetch or execute code during install. Model downloads are documented but performed by the user (via Hugging Face links).
Credentials
The skill requests no environment variables or credentials. The resources referenced (local model and vocoder files, llama-tts binary) are relevant and necessary for local TTS.
Persistence & Privilege
The skill does not request always:true, does not modify other skills, and does not try to persist credentials. It is user-invocable and can be invoked autonomously by the agent (platform default) — nothing here elevates privilege beyond expected behavior.
Assessment
This skill appears to do what it says: run your local 'llama-tts' binary against local model files. Before installing or running it:

  1. Verify that the 'llama-tts' binary you use comes from a trusted source and inspect its permissions.
  2. Download model/vocoder files from the official Hugging Face pages and verify checksums and licensing.
  3. Prefer placing models in a user-controlled directory rather than a global /data/public/... path to avoid accidental exposure or overwrites.
  4. Avoid running any downloaded binary as root, and review the binary's behavior before allowing autonomous agent invocation.

The script itself contains no network exfiltration or unrelated credential access.
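Step 2 above recommends verifying checksums. A minimal sketch of one way to do that with `sha256sum`; the filename and expected digest here are placeholders for illustration, not published values for these models:

```shell
# Verify a downloaded file against a known-good SHA-256 digest.
# EXPECTED would normally come from the publisher's release page;
# the value below is computed for a throwaway stand-in file.
FILE="model.gguf"
printf 'hello\n' > "$FILE"   # stand-in for a real download
EXPECTED="5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03"
ACTUAL=$(sha256sum "$FILE" | awk '{print $1}')
if [ "$ACTUAL" = "$EXPECTED" ]; then
  echo "checksum OK"
else
  echo "checksum MISMATCH" >&2
fi
```

Some publishers ship a `SHA256SUMS` file instead, in which case `sha256sum -c SHA256SUMS` checks every listed file at once.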

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🔊 Clawdis
Bins: llama-tts
latest: vk97ctsxvnp3t99stazre83a4vh818k07
730 downloads
0 stars
1 version
Updated 6h ago
v1.0.0
MIT-0

Local Llama TTS

Synthesize speech locally using llama-tts and the OuteTTS-1.0-0.6B model.

Usage

You can use the wrapper script:

  • scripts/tts-local.sh [options] "<text>"

Options

  • -o, --output <file>: Output WAV file (default: output.wav)
  • -s, --speaker <file>: Speaker reference file (optional)
  • -t, --temp <value>: Temperature (default: 0.4)
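The options above suggest a conventional flag-parsing loop. A minimal sketch of how such parsing might look; this is a hypothetical reconstruction, and the actual scripts/tts-local.sh may differ:

```shell
# Hypothetical sketch of the wrapper's option parsing.
# Defaults mirror the README: output.wav, temperature 0.4.
parse_args() {
  OUTPUT="output.wav"; SPEAKER=""; TEMP="0.4"
  while [ $# -gt 1 ]; do
    case "$1" in
      -o|--output)  OUTPUT="$2";  shift 2 ;;
      -s|--speaker) SPEAKER="$2"; shift 2 ;;
      -t|--temp)    TEMP="$2";    shift 2 ;;
      *) echo "unknown option: $1" >&2; return 1 ;;
    esac
  done
  TEXT="$1"   # last argument is the text to synthesize
}

parse_args -o hello.wav -t 0.7 "Hello from local TTS"
echo "output=$OUTPUT temp=$TEMP text=$TEXT"
```

Leaving the final positional argument outside the loop is why the quoted text must come last on the command line.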

Scripts

  • Location: scripts/tts-local.sh (inside skill folder)
  • Model: /data/public/machine-learning/models/text-to-speach/OuteTTS-1.0-0.6B-Q4_K_M.gguf
  • Vocoder: /data/public/machine-learning/models/text-to-speach/WavTokenizer-Large-75-Q4_0.gguf
  • GPU: Enabled via llama-tts.

Setup

  1. Model: Download from OuteAI/OuteTTS-1.0-0.6B-GGUF
  2. Vocoder: Download from ggml-org/WavTokenizer (Note: Felix uses a Q4_0 version; Q5_1 is linked here as a higher-quality alternative).

Place files in /data/public/machine-learning/models/text-to-speach/ or update scripts/tts-local.sh.

Sampling Configuration

The model card recommends the following settings (hardcoded in the script):

  • Temperature: 0.4
  • Repetition Penalty: 1.1
  • Repetition Range: 64
  • Top-k: 40
  • Top-p: 0.9
  • Min-p: 0.05
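Put together, the wrapper plausibly assembles a single llama-tts invocation from these settings. The flag names below are assumptions based on common llama.cpp CLI conventions, not confirmed against this script; check `llama-tts --help` for the binary you built:

```shell
# Hypothetical reconstruction of the command the wrapper might run.
# Paths come from the README; flag spellings are assumptions.
MODEL=/data/public/machine-learning/models/text-to-speach/OuteTTS-1.0-0.6B-Q4_K_M.gguf
VOCODER=/data/public/machine-learning/models/text-to-speach/WavTokenizer-Large-75-Q4_0.gguf
CMD="llama-tts -m $MODEL -mv $VOCODER \
  --temp 0.4 --repeat-penalty 1.1 --repeat-last-n 64 \
  --top-k 40 --top-p 0.9 --min-p 0.05 \
  -p \"<text>\""
echo "$CMD"
```

The sampling values match the model card's recommendations verbatim; only the flag names and the vocoder flag (`-mv`) would need checking against your build.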
