Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Aliyun Qwen Tts Voice Design

v1.0.0

Use when designing custom voices with Alibaba Cloud Model Studio Qwen TTS VD models. Use when creating custom synthetic voices from text descriptions and usi...

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal
Benign
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The name and description match the included helper script and guidance for using Alibaba Cloud's Qwen TTS voice-design models. Asking users to install a library (dashscope) and supply an API key is proportionate to interacting with a cloud provider. However, the skill metadata declares no required environment variables or primary credential, while the README explicitly requires DASHSCOPE_API_KEY (or a value in ~/.alibabacloud/credentials). That mismatch is an inconsistency.
Instruction Scope
SKILL.md instructs the user to set DASHSCOPE_API_KEY and to run a helper script at skills/ai/audio/aliyun-qwen-tts-voice-design/scripts/prepare_voice_design_request.py, but the shipped script lives at scripts/prepare_voice_design_request.py. Output paths are inconsistent (default output listed as output/ai-audio-tts-voice-design/audio/ vs validation writing output/aliyun-qwen-tts-voice-design/validate.txt). The instructions otherwise stay within the expected scope (install package, create request JSON, run minimal read-only query), and the included script only writes/validates JSON — it does not perform network calls or access other system secrets itself.
Install Mechanism
This is instruction-only (no install spec). SKILL.md asks to pip install dashscope into a venv, which is a reasonable approach for interacting with a provider SDK. Installing a third-party package is normal but introduces typical supply-chain risk; no direct download URLs or archive extraction are present in the package.
Credentials
The skill metadata claims 'no required env vars', yet the runtime instructions require DASHSCOPE_API_KEY or an entry in ~/.alibabacloud/credentials. Requesting a cloud SDK API key is expected for this purpose, but it should be declared in the skill manifest. The README also asks to include region/resource IDs/time ranges in evidence files — those identifiers may be sensitive and should be handled intentionally. No other unrelated credentials are requested.
Persistence & Privilege
The skill does not request always:true, does not install system-wide changes, and does not modify other skills. It writes output artifacts under local output/ folders as described — normal behavior for an operation-focused skill.
What to consider before installing
This skill appears to implement what it advertises (voice design for Alibaba Cloud Qwen TTS) but has sloppy packaging and metadata mismatches you should resolve before trusting it. Specifically:

  • The README requires DASHSCOPE_API_KEY (or a ~/.alibabacloud/credentials entry) but the skill manifest lists no required env vars; assume the skill needs an Alibaba Cloud API key and provide only least-privileged credentials.
  • The SKILL.md references a helper script path that does not match the shipped file layout, and shows inconsistent output paths; verify and correct the paths before running automated agents to avoid failing commands or writing files to unexpected locations.
  • The skill asks you to pip install the third-party package dashscope. Inspect that package (or install it in an isolated virtual environment / sandbox) before use to reduce supply-chain risk.
  • Review any evidence files you generate for sensitive identifiers (region, resource IDs, timestamps) before storing or sharing them.

Recommended actions: ask the author to update the skill metadata to declare required env vars (DASHSCOPE_API_KEY), fix the script paths and output-folder references in SKILL.md, and provide a minimal example run that you can audit. If you proceed, test in an isolated environment with a read-only minimal credential first.


SKILL.md

Category: provider

Model Studio Qwen TTS Voice Design

Use voice design models to create controllable synthetic voices from natural language descriptions.

Critical model names

Use one of these exact model strings:

  • qwen3-tts-vd-2026-01-26
  • qwen3-tts-vd-realtime-2026-01-15
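Because the model strings above must match exactly, a small guard can catch typos before a request is sent. This is an illustrative helper, not part of the shipped skill:

```python
# Validate a model name against the exact strings listed in SKILL.md.
ALLOWED_MODELS = {
    "qwen3-tts-vd-2026-01-26",
    "qwen3-tts-vd-realtime-2026-01-15",
}

def check_model(name: str) -> str:
    """Return the model name if it is one of the allowed strings, else raise."""
    if name not in ALLOWED_MODELS:
        raise ValueError(f"unsupported model: {name!r}")
    return name
```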

Prerequisites

  • Install SDK in a virtual environment:
python3 -m venv .venv
. .venv/bin/activate
python -m pip install dashscope
  • Set DASHSCOPE_API_KEY in your environment, or add dashscope_api_key to ~/.alibabacloud/credentials.

Normalized interface (tts.voice_design)

Request

  • voice_prompt (string, required) target voice description
  • text (string, required)
  • stream (bool, optional)

Response

  • audio_url (string) or streaming PCM chunks
  • voice_id (string)
  • request_id (string)
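The request and response shapes above can be expressed as a small builder and validator. These helper names are illustrative, not part of the dashscope SDK:

```python
# Build a normalized tts.voice_design request and check that a
# response dict carries the documented keys.
def build_request(voice_prompt: str, text: str, stream: bool = False) -> dict:
    """Enforce the two required fields and return the normalized payload."""
    if not voice_prompt or not text:
        raise ValueError("voice_prompt and text are required")
    return {"voice_prompt": voice_prompt, "text": text, "stream": stream}

# For non-streaming calls the response should include all three keys;
# streaming calls deliver PCM chunks instead of audio_url.
EXPECTED_RESPONSE_KEYS = {"audio_url", "voice_id", "request_id"}

def validate_response(resp: dict) -> bool:
    return EXPECTED_RESPONSE_KEYS <= resp.keys()
```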

Operational guidance

  • Write voice prompts with tone, pace, emotion, and timbre constraints.
  • Build a reusable voice prompt library for product consistency.
  • Validate generated voice in short utterances before long scripts.

Local helper script

Prepare a normalized request JSON and validate response schema:

.venv/bin/python skills/ai/audio/aliyun-qwen-tts-voice-design/scripts/prepare_voice_design_request.py \
  --voice-prompt "A warm female host voice, clear articulation, medium pace" \
  --text "This is a voice-design demo"

Output location

  • Default output: output/ai-audio-tts-voice-design/audio/
  • Override base dir with OUTPUT_DIR.

Validation

mkdir -p output/aliyun-qwen-tts-voice-design
for f in skills/ai/audio/aliyun-qwen-tts-voice-design/scripts/*.py; do
  python3 -m py_compile "$f"
done
echo "py_compile_ok" > output/aliyun-qwen-tts-voice-design/validate.txt

Pass criteria: command exits 0 and output/aliyun-qwen-tts-voice-design/validate.txt is generated.

Output And Evidence

  • Save artifacts, command outputs, and API response summaries under output/aliyun-qwen-tts-voice-design/.
  • Include key parameters (region/resource id/time range) in evidence files for reproducibility.
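One way to capture those parameters in a reproducible evidence file is sketched below; the field names are illustrative, and per the security review you should check these values for sensitive identifiers before sharing:

```python
# Write an evidence JSON recording the key parameters of a run
# (region, resource id, time range) alongside a result summary.
import json
from pathlib import Path

def write_evidence(out_dir: Path, region: str, resource_id: str,
                   time_range: str, summary: str) -> Path:
    out_dir.mkdir(parents=True, exist_ok=True)
    path = out_dir / "evidence.json"
    path.write_text(json.dumps({
        "region": region,
        "resource_id": resource_id,
        "time_range": time_range,
        "summary": summary,
    }, indent=2))
    return path
```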

Workflow

  1. Confirm user intent, region, identifiers, and whether the operation is read-only or mutating.
  2. Run one minimal read-only query first to verify connectivity and permissions.
  3. Execute the target operation with explicit parameters and bounded scope.
  4. Verify results and save output/evidence files.

References

  • references/sources.md

Files

4 total