Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Aliyun Qwen Tts Realtime

v1.0.0

Use when real-time speech synthesis is needed with Alibaba Cloud Model Studio Qwen TTS Realtime models. Use when low-latency interactive speech is required,...

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal
Benign
OpenClaw
Suspicious
medium confidence
Purpose & Capability
Name, description, SKILL.md, and the included script consistently implement Alibaba Cloud Qwen realtime TTS via the dashscope SDK, which is coherent with the stated purpose. However, the registry metadata lists no required environment variables or primary credential, while both SKILL.md and the script require DASHSCOPE_API_KEY (or dashscope_api_key in ~/.alibabacloud/credentials). That metadata/instruction mismatch is a notable coherence problem.
Instruction Scope
Runtime instructions and the script are focused on probing realtime TTS and falling back to a non-realtime model. The script will:

- load .env from the current working directory and from the repo root (if a .git directory is present)
- read ~/.alibabacloud/credentials for dashscope_api_key
- call dashscope.MultiModalConversation (streaming or non-streaming)
- download audio URLs returned by the service

These actions are expected for this demo, but they do involve reading local .env/credentials files, making network requests, and writing files.
Install Mechanism
No install spec is embedded. SKILL.md asks the user to create a venv and pip install dashscope. This is a normal, low-risk install pattern; there are no embedded downloads or unknown URLs in an install script.
Credentials
The skill requires a single API key (DASHSCOPE_API_KEY) in practice, which is proportionate to calling the dashscope API. However the registry metadata does not declare this required environment variable or primary credential, and the script also loads plaintext .env and ~/.alibabacloud/credentials. The omission in declared requirements reduces transparency and is a security usability concern.
Persistence & Privilege
The skill is not always-enabled, does not request elevated platform privileges, and does not modify other skills’ configs. It runs as an on-demand demo script and writes outputs to a local output/ directory as documented.
What to consider before installing
This appears to be a legitimate Alibaba Cloud Qwen realtime TTS demo, but there are two main concerns to consider before installing or running it:

- Missing declared credential: The registry metadata lists no required env vars, yet SKILL.md and the script require DASHSCOPE_API_KEY (or dashscope_api_key in ~/.alibabacloud/credentials). Ask the publisher to explicitly declare required credentials in the metadata. Do not provide broad credentials until that is fixed.
- Local file reads & network activity: The script will read .env files (cwd and repo root) and ~/.alibabacloud/credentials, call the dashscope API, and download audio URLs returned by the service. Ensure you don't keep unrelated secrets in .env or your home credentials file. Run the demo in an isolated virtualenv and review the dashscope package (pip source or wheel) before installation to confirm it's the expected SDK.

Other practical steps: run the script with a minimal, scoped API key that has only TTS permissions; inspect output files under the documented output directory; and consider running the probe in a network-restricted environment first. If you need higher assurance, ask the publisher to update the metadata to declare DASHSCOPE_API_KEY and to link to an official dashscope package/source.


Latest version: vk97dh0mxhrjzy4tyrp7jvvpkxs841gk4


SKILL.md

Category: provider

Model Studio Qwen TTS Realtime

Use realtime TTS models for low-latency streaming speech output.

Critical model names

Use one of these exact model strings:

  • qwen3-tts-flash-realtime
  • qwen3-tts-instruct-flash-realtime
  • qwen3-tts-instruct-flash-realtime-2026-01-22
  • qwen3-tts-vd-realtime-2026-01-15
  • qwen3-tts-vc-realtime-2026-01-15

Prerequisites

  • Install SDK in a virtual environment:
python3 -m venv .venv
. .venv/bin/activate
python -m pip install dashscope
  • Set DASHSCOPE_API_KEY in your environment, or add dashscope_api_key to ~/.alibabacloud/credentials.
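If you use the credentials-file option, the script expects a dashscope_api_key entry in ~/.alibabacloud/credentials. A minimal sketch of that file, assuming the common INI layout with a [default] section (the section name is an assumption; only the key name comes from SKILL.md):

```ini
[default]
dashscope_api_key = sk-your-key-here
```

Note this file is plaintext; use a minimal, TTS-scoped key.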

Normalized interface (tts.realtime)

Request

  • text (string, required)
  • voice (string, required)
  • instruction (string, optional)
  • sample_rate (int, optional)

Response

  • audio_base64_pcm_chunks (array<string>)
  • sample_rate (int)
  • finish_reason (string)
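A response in this shape can be assembled into a playable WAV file. The helper below is a sketch, assuming 16-bit mono PCM samples, which the interface above does not actually specify; verify the sample format against the service's documentation before relying on it.

```python
import base64
import io
import wave


def pcm_chunks_to_wav(audio_base64_pcm_chunks: list[str], sample_rate: int) -> bytes:
    """Decode base64 PCM chunks and wrap them in a WAV container.
    Assumes 16-bit mono samples; adjust if the service says otherwise."""
    pcm = b"".join(base64.b64decode(chunk) for chunk in audio_base64_pcm_chunks)
    buf = io.BytesIO()
    with wave.open(buf, "wb") as wav:
        wav.setnchannels(1)   # mono (assumption)
        wav.setsampwidth(2)   # 16-bit samples (assumption)
        wav.setframerate(sample_rate)
        wav.writeframes(pcm)
    return buf.getvalue()
```

Write the returned bytes to a .wav file to play the synthesized audio.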

Operational guidance

  • Use websocket or streaming endpoint for realtime mode.
  • Keep each utterance short for lower latency.
  • For instruction models, keep instruction explicit and concise.
  • Some SDK/runtime combinations may reject realtime model calls over MultiModalConversation; use the probe script below to verify compatibility.
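One way to follow the short-utterance advice is to split long input at sentence boundaries before synthesis, sending each piece as its own request. This splitter is an illustrative sketch, not part of the skill, and the 80-character default is an arbitrary choice:

```python
import re


def split_utterances(text: str, max_chars: int = 80) -> list[str]:
    """Split text into sentence-sized utterances no longer than max_chars,
    so each realtime TTS request stays short for lower latency."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    utterances: list[str] = []
    current = ""
    for sentence in sentences:
        if not sentence:
            continue
        if current and len(current) + 1 + len(sentence) > max_chars:
            utterances.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        utterances.append(current)
    return utterances
```

A single sentence longer than max_chars is passed through unsplit; tighten the logic if your latency budget requires hard limits.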

Local demo script

Use the probe script to verify realtime compatibility in your current SDK/runtime, and optionally fallback to a non-realtime model for immediate output:

.venv/bin/python skills/ai/audio/aliyun-qwen-tts-realtime/scripts/realtime_tts_demo.py \
  --text "This is a realtime speech demo." \
  --fallback \
  --output output/ai-audio-tts-realtime/audio/fallback-demo.wav

Strict mode (for CI / gating):

.venv/bin/python skills/ai/audio/aliyun-qwen-tts-realtime/scripts/realtime_tts_demo.py \
  --text "realtime health check" \
  --strict

Output location

  • Default output: output/ai-audio-tts-realtime/audio/
  • Override base dir with OUTPUT_DIR.

Validation

mkdir -p output/aliyun-qwen-tts-realtime
for f in skills/ai/audio/aliyun-qwen-tts-realtime/scripts/*.py; do
  python3 -m py_compile "$f"
done
echo "py_compile_ok" > output/aliyun-qwen-tts-realtime/validate.txt

Pass criteria: command exits 0 and output/aliyun-qwen-tts-realtime/validate.txt is generated.

Output and evidence

  • Save artifacts, command outputs, and API response summaries under output/aliyun-qwen-tts-realtime/.
  • Include key parameters (region/resource id/time range) in evidence files for reproducibility.

Workflow

  1. Confirm user intent, region, identifiers, and whether the operation is read-only or mutating.
  2. Run one minimal read-only query first to verify connectivity and permissions.
  3. Execute the target operation with explicit parameters and bounded scope.
  4. Verify results and save output/evidence files.

References

  • references/sources.md

Files

4 total
