Qwen Audio
v0.0.6 · High-performance audio library with text-to-speech (TTS) and speech-to-text (STT).
⭐ 1 · 345 · 1 current · 1 all-time
by noah@darknoah
MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Suspicious (medium confidence)

Purpose & Capability
The name and description (TTS/STT) match the included code and the pyproject dependencies (qwen-asr, qwen-tts, mlx-audio, torch). However, the SKILL.md and registry metadata claim no required binaries or environment variables, while the instructions and code rely on the 'uv' CLI and Python >= 3.10, and may require network access to download large models. The overall capability is coherent with the stated purpose, but some required runtime pieces are not declared in the metadata.
Instruction Scope
Runtime instructions tell the agent to run 'uv run ...' and to manipulate a local ./voices/ directory; the code will read and write these local voice files. Instructions require the user to run env-checks and to explicitly confirm voice selection before TTS, which limits accidental use. The SKILL.md does not explicitly warn that model downloads and package installs will occur, but the code will contact Hugging Face and other endpoints and can operate in online/offline modes.
Install Mechanism
There is no platform install spec (instruction-only), but the pyproject.toml lists heavy ML dependencies and a custom torch index. The script itself will run a shell command (os.system("uv add mlx-audio ...")) to install missing packages at runtime. Auto-install and model downloads introduce moderate risk (large network/disk operations and execution of runtime-installed packages).
Credentials
The skill declares no required environment variables, but the code reads/uses QWEN_AUDIO_DEVICE, QWEN_AUDIO_DTYPE, HF_ENDPOINT and may set HF_HUB_OFFLINE. No secret or credential env vars are requested. The mismatch between declared requirements and actual env usage reduces transparency and should be resolved before trusting the skill.
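The undeclared environment variables steer runtime behavior roughly like this sketch (the default values shown are illustrative assumptions, not the skill's actual defaults):

```python
import os

def read_audio_config() -> dict:
    """Collect the env vars the scan found the code reading; defaults
    here are placeholders for illustration only."""
    return {
        "device": os.environ.get("QWEN_AUDIO_DEVICE", "cpu"),
        "dtype": os.environ.get("QWEN_AUDIO_DTYPE", "float32"),
        "hf_endpoint": os.environ.get("HF_ENDPOINT", "https://huggingface.co"),
        "offline": os.environ.get("HF_HUB_OFFLINE", "0") == "1",
    }
```

Because none of these are declared in the metadata, a reviewer has to find them by reading the source, which is the transparency gap noted above.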
Persistence & Privilege
The 'always' flag is false, and the skill does not request system-wide config changes or other skills' credentials. It will write voice profiles under its own ./voices/ directory and may create or update files like references/env-check-list.md as instructed, which is normal for a local audio skill.
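Writes confined to the skill's own ./voices/ directory are low risk as long as file names cannot escape it. A hedged sketch of such a confinement check (the save_voice_profile helper and the on-disk profile format are hypothetical, not taken from the skill):

```python
from pathlib import Path

VOICES_DIR = Path("voices")  # the skill's local ./voices/ directory

def save_voice_profile(name: str, data: bytes) -> Path:
    """Write a voice profile under ./voices/, rejecting names that would
    resolve outside it (e.g. '../escape.bin')."""
    root = VOICES_DIR.resolve()
    root.mkdir(exist_ok=True)
    target = (root / name).resolve()
    if root not in target.parents:
        raise ValueError(f"refusing to write outside {root}: {name}")
    target.write_bytes(data)
    return target
```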
What to consider before installing
This skill implements TTS/STT and largely does what it says, but take these precautions before installing or letting an agent run it:
- Run it in an isolated environment (VM/container) because it will download and install heavy ML packages and models (torch, qwen-tts/asr, etc.), which use significant disk, memory, and network.
- Ensure you have the 'uv' CLI and Python 3.10+ available — the SKILL.md uses 'uv run' but the registry metadata does not list 'uv' as a required binary.
- Expect network access to Hugging Face and other endpoints (the code probes HF_ENDPOINT and can download models). If you need to avoid external network traffic, do not install or run the skill.
- The script may auto-install missing Python packages via os.system('uv add ...') — this is a legitimate convenience but increases runtime privilege and attack surface. Review the pyproject.toml and the packages it will pull before proceeding.
- Voices and other files are stored under ./voices/ and the skill will write to the skill folder; consider filesystem permissions and where you run it.
- No credentials are requested, but environment variables (QWEN_AUDIO_DEVICE, QWEN_AUDIO_DTYPE, HF_ENDPOINT) influence behavior; these are not declared in the metadata and should be documented or locked down.
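The lockdown suggested in the last point can be scripted by pinning the relevant variables before launching the skill. A sketch under the assumptions that 'uv run scripts/qwen-audio.py' is the entry point per the SKILL.md and that the pinned values shown are merely examples:

```python
import os
import subprocess

def locked_env() -> dict:
    """Copy the current environment, pinning the variables the skill reads
    so they cannot silently change behavior."""
    env = dict(os.environ)
    env["HF_HUB_OFFLINE"] = "1"          # block model downloads
    env["QWEN_AUDIO_DEVICE"] = "cpu"     # example value
    env["QWEN_AUDIO_DTYPE"] = "float32"  # example value
    return env

def run_skill(args: list[str]) -> subprocess.CompletedProcess:
    # 'uv run' and the script path follow the SKILL.md usage
    return subprocess.run(["uv", "run", "scripts/qwen-audio.py", *args], env=locked_env())
```

Setting HF_HUB_OFFLINE=1 also makes any unexpected download attempt fail loudly instead of proceeding silently.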
If you need lower risk, ask the author to (1) declare required binaries and env vars explicitly, (2) remove runtime auto-installs or make them opt-in, and (3) document model download endpoints and disk requirements. Review the full scripts/qwen-audio.py before granting the skill autonomous invocation.
latest: vk977017ffhh34jet63cc4zgwcx82fg5m
