Jarvis Vocal
v1.0.0

Authentic J.A.R.V.I.S. voice synthesis using Piper TTS with a HuggingFace-trained model. Generates the movie-accurate voice locally and can push audio to connected Android devices.
Security Scan
Scanner: OpenClaw
Verdict: Benign (high confidence)

Purpose & Capability
Name/description (J.A.R.V.I.S. voice via Piper TTS and HuggingFace model) match the instructions and package metadata: the SKILL.md and README show how to install Piper, download the model, generate WAVs, and push them to Android devices via ADB/Tailscale. No unrelated binaries, env vars, or config paths are required by the skill itself.
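The pipeline described above (synthesize a WAV locally, then push it over ADB) can be sketched as a guarded shell script. This is a sketch, not taken from the skill itself: the model filename `jarvis.onnx` and the `/sdcard/Download/` target are assumptions.

```shell
# Hypothetical end-to-end run; jarvis.onnx and /sdcard/Download/ are assumptions.
VOICE="$HOME/.local/share/piper/voices/jarvis.onnx"
WAV="/tmp/jarvis-line.wav"

if command -v piper >/dev/null 2>&1 && [ -f "$VOICE" ]; then
  # piper reads text on stdin and writes a WAV to --output_file
  echo "Good evening." | piper --model "$VOICE" --output_file "$WAV"
  # push to a paired Android device (skipped if adb is absent)
  command -v adb >/dev/null 2>&1 && adb push "$WAV" /sdcard/Download/
else
  echo "piper or the voice model is not installed; see the install steps"
fi
```

The guards make the sketch safe to run before everything is installed: it only synthesizes and pushes when both the binary and the model file are present.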
Instruction Scope
Instructions are narrowly scoped to installing piper-tts/ffmpeg, using the HuggingFace CLI to download model files into ~/.local/share/piper/voices, generating audio, and optionally streaming or pushing it via adb. They do not instruct reading unrelated system files or exporting arbitrary data. Note that ADB/Tailscale gives device-level access to paired Android devices (expected for the advertised capability), and the README contains a sample IP that is only illustrative but could be misleading if copied verbatim.
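The download step above targets a fixed voices directory. A minimal sketch of preparing it, with the actual repo id deliberately left to the README rather than guessed here:

```shell
# Create piper's expected voices directory before downloading the model.
VOICE_DIR="$HOME/.local/share/piper/voices"
mkdir -p "$VOICE_DIR"
echo "voices dir ready: $VOICE_DIR"
# Then fetch the model with the HuggingFace CLI, e.g.:
#   hf download <repo-id-from-README> --local-dir "$VOICE_DIR"
```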
Install Mechanism
This is an instruction-only skill with no automated install step. It tells the user to pipx-install piper-tts and to use the HuggingFace CLI and ffmpeg. These are reasonable tools for TTS, but they are third-party software that will run locally; verify the upstream packages (piper-tts, the hf CLI) before installing. Because installation is manual, the skill itself never downloads or executes code automatically.
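Since installation is manual, a quick pre-flight check that the required third-party tools actually landed on PATH can be sketched as:

```shell
# Report which of the required tools are installed; nothing is executed.
REPORT=""
for tool in piper ffmpeg adb; do
  if command -v "$tool" >/dev/null 2>&1; then
    line="ok: $tool"
  else
    line="missing: $tool"
  fi
  echo "$line"
  REPORT="$REPORT $line"
done
```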
Credentials
The skill declares no required environment variables or credentials. Operationally, the workflow may prompt for HuggingFace authentication if the model is gated, and it requires an ADB-paired Android device (pairing grants permission to push and play files). There are no unrelated secret requests in the manifest or instructions.
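Pairing status is easy to audit before pushing anything. A small guarded check, assuming the standard adb CLI from Android platform-tools:

```shell
# List authorized devices; an "unauthorized" or "offline" entry means pairing
# is incomplete and the host cannot yet push files.
if command -v adb >/dev/null 2>&1; then
  adb devices
  ADB_PRESENT=yes
else
  echo "adb not installed; install platform-tools and pair the device first"
  ADB_PRESENT=no
fi
```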
Persistence & Privilege
The skill is not force-included (always: false) and has no install step that persists code automatically. It does not request elevated agent privileges or modify other skills. Note that the platform default allows autonomous invocation; that is expected and not by itself a concern here.
Assessment
This skill appears coherent, but before installing or running anything:
1) Inspect and verify the upstream packages it asks you to install (piper-tts, HuggingFace CLI, ffmpeg) and install them manually via trusted channels.
2) Confirm the HuggingFace model license and provenance: the README claims movie-line training, a legal/ethical risk even if the model metadata says MIT.
3) Understand the ADB/Tailscale implications: pairing a phone gives the host permission to push files and trigger playback, so only pair with devices you trust.
4) Review any wrapper scripts (jarvis-speak/jarvis-tts) before running to ensure they do only generation/push/cleanup and contain no unexpected commands.
5) Because installation is manual, nothing in the skill will automatically exfiltrate credentials, but installed third-party tools will run on your machine; treat them as you would any pip-installed software.

Like a lobster shell, security has layers: review code before you run it.
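Reviewing the wrapper scripts named in point 4 is mechanical; a sketch that locates and prints them for inspection instead of executing them (the script names come from the assessment, their paths are resolved at run time):

```shell
# Print each wrapper for review rather than running it.
for s in jarvis-speak jarvis-tts; do
  p=$(command -v "$s" 2>/dev/null)
  if [ -n "$p" ]; then
    echo "--- $p ---"
    sed -n '1,60p' "$p"   # expect only generation, push, and cleanup commands
  else
    echo "not installed: $s"
  fi
done
```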
Tags: jarvis · latest · piper · tts · voice
License
MIT-0
Free to use, modify, and redistribute. No attribution required.
Runtime requirements
🎙️ Clawdis
