Skill v1.1.0

ClawScan security

Truly Local Piper Multilang TTS (secure) · ClawHub's context-aware review of the artifact, metadata, and declared behavior.

Scanner verdict

Benign · Feb 23, 2026, 9:32 PM
Verdict
benign
Confidence
high
Model
gpt-5-mini
Summary
The skill's code, instructions, and requirements are coherent with its stated purpose (local Piper TTS); it requests no credentials and requires only Python/ffmpeg plus optional voice downloads from HTTPS sources.
Guidance
This skill is internally consistent for providing local Piper-based TTS: it will create a Python venv inside the skill folder, install piper-tts from PyPI, and download voice model files over HTTPS (one-time or on-demand). Things to consider before installing:
1) Trust and provenance: the skill owner/homepage is not provided; verify that you trust the package code and the piper-tts PyPI package it installs.
2) Network and disk: voice models are large (~65 MB each) and are downloaded over the network; confirm you want those downloads.
3) Supply-chain risk: pip will fetch third-party packages into the venv; if you require higher assurance, inspect piper-tts or run it in an isolated environment.
4) System dependency: espeak-ng is required for phonemization and must be installed separately if desired.
The skill will ask for your confirmation before setup and before downloads; only proceed if you consent to those operations.

Review Dimensions

Purpose & Capability
ok
The name and description (local Piper TTS) align with the required binaries (python3, ffmpeg) and the provided code. The skill creates a venv, runs piper-tts, and stores models and outputs in the skill directory, all expected for an offline TTS skill. Note: the package's source/owner has no public homepage listed, which affects trust but not functional coherence.
Instruction Scope
ok
SKILL.md and the code confine actions to the skill directory and the OpenClaw workspace: create a venv, install piper-tts into it, download voice model files, generate audio, and save config.json. The runtime instructions require explicit user confirmation before setup and before downloading models. There are no instructions to read unrelated system files or to exfiltrate data; TTS synthesis runs locally and the script enforces model/path bounds.
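The "model/path bounds" enforcement mentioned above is typically done by resolving a requested path and checking containment in the skill directory. A sketch under that assumption (the function name is hypothetical, not from the skill):

```python
from pathlib import Path

def resolve_in_skill(skill_dir: str, relative: str) -> Path:
    """Resolve a user-supplied relative path and refuse anything that
    escapes the skill directory (e.g. via '..' segments or symlinks)."""
    base = Path(skill_dir).resolve()
    candidate = (base / relative).resolve()
    # Containment check: candidate must be the base itself or live under it
    if candidate != base and base not in candidate.parents:
        raise ValueError(f"path escapes skill directory: {relative}")
    return candidate
```

Resolving before comparing is the key step: a naive string-prefix check on the unresolved path would let "../" segments slip through.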
Install Mechanism
note
There is no platform install spec; the skill's setup() uses pip to install piper-tts and pathvalidate into an isolated venv inside the skill directory. This is expected for this functionality but implies network activity and executing third-party Python packages from PyPI (a supply-chain risk). Voice models are downloaded over HTTPS and can be ~65 MB each; index.js enforces HTTPS, handles redirects, and uses atomic writes, which lowers risk.
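The HTTPS-only and atomic-write behavior attributed to index.js can be illustrated as follows. The skill's downloader is JavaScript; this Python sketch only shows the pattern (scheme check, write to a temp file, atomic rename), with hypothetical function names:

```python
import os
import tempfile
import urllib.request
from urllib.parse import urlparse

def check_https(url: str) -> str:
    """Reject any model URL that is not HTTPS."""
    if urlparse(url).scheme != "https":
        raise ValueError(f"refusing non-HTTPS model URL: {url}")
    return url

def download_model(url: str, dest: str) -> None:
    """Download to a temp file in the destination directory, then rename.
    An interrupted transfer never leaves a truncated model file at dest."""
    check_https(url)
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(dest) or ".")
    try:
        with urllib.request.urlopen(url) as resp, os.fdopen(fd, "wb") as out:
            while chunk := resp.read(1 << 16):
                out.write(chunk)
        os.replace(tmp, dest)  # atomic on POSIX: readers see old or new, never partial
    except BaseException:
        if os.path.exists(tmp):
            os.remove(tmp)
        raise
```

The atomic rename matters for ~65 MB models: a crash mid-download leaves only a stray temp file, not a corrupt .onnx that piper would try to load.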
Credentials
ok
The skill requests no credentials or secrets. It uses standard environment values (HOME / OPENCLAW_WORKSPACE) and sets PIPER_VOICE_MODEL / PIPER_LENGTH_SCALE internally for subprocess runs. No unrelated environment variables or config paths are requested.
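Setting those variables "internally for subprocess runs" usually means copying the environment per invocation rather than mutating the parent process. A sketch under that assumption (function names and the piper CLI flags are illustrative):

```python
import os
import subprocess

def build_env(model_path: str, length_scale: float = 1.0) -> dict:
    """Copy the current environment and add the two Piper settings the
    review names; nothing secret is read or injected."""
    env = dict(os.environ)  # inherits HOME / OPENCLAW_WORKSPACE unchanged
    env["PIPER_VOICE_MODEL"] = model_path
    env["PIPER_LENGTH_SCALE"] = str(length_scale)
    return env

def synth(text: str, model_path: str, out_path: str = "out.wav") -> None:
    # Invocation is illustrative; actual flags vary by piper-tts version.
    subprocess.run(["piper", "--output_file", out_path],
                   input=text.encode(), env=build_env(model_path), check=True)
```

Passing env= to subprocess.run scopes the settings to that one child process, so concurrent synth calls with different voices cannot clobber each other.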
Persistence & Privilege
ok
always:false (no forced global inclusion). The skill persists a venv, downloaded models, and a config.json inside its own skill directory, which is expected for this functionality and scoped to the skill. It does not modify other skills or system-wide settings.