Alicloud AI Audio ASR
Transcribe non-realtime speech with Alibaba Cloud Model Studio Qwen ASR models (`qwen3-asr-flash`, `qwen-audio-asr`, `qwen3-asr-flash-filetrans`). Use when c...
MIT-0 · Free to use, modify, and redistribute. No attribution required.
⭐ 0 · 131 · 0 current installs · 0 all-time installs
Security Scan
OpenClaw
Suspicious (high confidence)
Purpose & Capability
The skill's description and SKILL.md describe using Alibaba Cloud DashScope/Qwen ASR and require an API key (DASHSCOPE_API_KEY). However, the skill metadata declares no required environment variables or primary credential. This mismatch (no declared credential but runtime requiring one) is incoherent: a transcription skill legitimately needs an API key, so the metadata should list it.
Instruction Scope
The SKILL.md and bundled script instruct the agent to read environment variables, ~/.alibabacloud/credentials, and local .env files (current working directory and repository root). While reading an ASR API key is expected, automatically loading arbitrary .env files and repo-level .env can pick up unrelated secrets or configuration and incorporate them into the environment used for requests. The script also will base64-encode and upload local audio (data URI), which is expected for local-file transcription but is a potential data exfiltration pathway if users are unaware.
Install Mechanism
No install spec is provided (instruction-only with a helper Python script). The SKILL.md suggests creating a virtualenv but does not download external code or archives. The script uses only Python stdlib, so there is no high-risk install step.
Credentials
The runtime expects DASHSCOPE_API_KEY (and supports ALIBABA_CLOUD_PROFILE/ALICLOUD_PROFILE) but the registry metadata lists no required env vars or primary credential. The script will also load .env files from cwd and repository root and read ~/.alibabacloud/credentials to populate DASHSCOPE_API_KEY. Requiring the ASR API key is proportional to the stated purpose, but silently loading additional .env files and credentials is broader than necessary and not declared.
Persistence & Privilege
The skill does not request always:true and does not attempt to modify other skills or system-wide agent settings. It writes outputs to specified output paths under output/alicloud-ai-audio-asr/, which is consistent with its stated behavior.
What to consider before installing
This skill appears to be a legitimate Alibaba Cloud Qwen ASR helper, but its metadata omits the fact that it needs DASHSCOPE_API_KEY and that the bundled script will read .env files (cwd and repo root) and ~/.alibabacloud/credentials. Before installing:
- Expect to provide DASHSCOPE_API_KEY (or add dashscope_api_key to ~/.alibabacloud/credentials). Prefer setting the env var at runtime rather than leaving keys in repo .env files.
- Be aware the helper will read .env files and the repo root .env (if a .git directory exists). Remove or move any unrelated secrets from those .env files to avoid accidental use.
- The script will base64-encode and upload local audio as data URIs; do not transcribe sensitive audio unless you trust the remote service and network.
- Confirm the endpoints (dashscope.aliyuncs.com) and API behavior match your expectations and that you are comfortable sending audio and transcripts to that service.
- If you require strict least privilege or want the metadata to be accurate, ask the publisher to declare DASHSCOPE_API_KEY (the primary credential) in the skill manifest, and either to stop implicitly loading repository .env files or to make that behavior opt-in.
Current version: v1.0.0
SKILL.md
Category: provider
Model Studio Qwen ASR (Non-Realtime)
Validation
mkdir -p output/alicloud-ai-audio-asr
python -m py_compile skills/ai/audio/alicloud-ai-audio-asr/scripts/transcribe_audio.py && echo "py_compile_ok" > output/alicloud-ai-audio-asr/validate.txt
Pass criteria: command exits 0 and output/alicloud-ai-audio-asr/validate.txt is generated.
Output And Evidence
- Store transcripts and API responses under `output/alicloud-ai-audio-asr/`.
- Keep one command log or sample response per run.
Use Qwen ASR for recorded audio transcription (non-realtime), including short audio sync calls and long audio async jobs.
Critical model names
Use one of these exact model strings:
- `qwen3-asr-flash`
- `qwen-audio-asr`
- `qwen3-asr-flash-filetrans`
Selection guidance:
- Use `qwen3-asr-flash` or `qwen-audio-asr` for short/normal recordings (sync).
- Use `qwen3-asr-flash-filetrans` for long-file transcription (async task workflow).
Prerequisites
- Optionally create a virtual environment (the helper script uses only the Python standard library, so no SDK installation is required):
python3 -m venv .venv
. .venv/bin/activate
- Set `DASHSCOPE_API_KEY` in the environment, or add `dashscope_api_key` to `~/.alibabacloud/credentials`.
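The credential lookup order described above (environment variable first, then `~/.alibabacloud/credentials`) can be sketched as follows. The INI-style `[default]` profile section is an assumption about the credentials file layout; check the bundled script for the exact parsing it performs.

```python
import configparser
import os
from pathlib import Path

def resolve_api_key(profile="default"):
    """Return the DashScope API key from the environment, falling back to
    ~/.alibabacloud/credentials. The [profile] section layout is assumed."""
    key = os.environ.get("DASHSCOPE_API_KEY")
    if key:
        return key
    cred_path = Path.home() / ".alibabacloud" / "credentials"
    if cred_path.is_file():
        config = configparser.ConfigParser()
        config.read(cred_path)
        # has_option returns False (it does not raise) when the section is absent.
        if config.has_option(profile, "dashscope_api_key"):
            return config.get(profile, "dashscope_api_key")
    return None
```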
Normalized interface (asr.transcribe)
Request
- `audio` (string, required): public URL or local file path.
- `model` (string, optional): default `qwen3-asr-flash`.
- `language_hints` (array<string>, optional): e.g. `zh`, `en`.
- `sample_rate` (number, optional)
- `vocabulary_id` (string, optional)
- `disfluency_removal_enabled` (bool, optional)
- `timestamp_granularities` (array<string>, optional): e.g. `sentence`.
- `async` (bool, optional): default false for sync models, true for `qwen3-asr-flash-filetrans`.
Response
- `text` (string): normalized transcript text.
- `task_id` (string, optional): present for async submission.
- `status` (string): `SUCCEEDED` or submission status.
- `raw` (object): original API response.
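For illustration, a request/response pair under this normalized interface might look like the following; the field names follow the interface described above, and the transcript value is a placeholder, not real output.

```python
# Illustrative shapes for the normalized asr.transcribe interface.
request = {
    "audio": "https://dashscope.oss-cn-beijing.aliyuncs.com/audios/welcome.mp3",
    "model": "qwen3-asr-flash",
    "language_hints": ["zh", "en"],
    "timestamp_granularities": ["sentence"],
}

response = {
    "text": "...",          # normalized transcript text (placeholder)
    "status": "SUCCEEDED",  # or a submission status for async jobs
    "raw": {},              # original API response preserved verbatim
}
```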
Quick start (official HTTP API)
Sync transcription (OpenAI-compatible protocol):
curl -sS --location 'https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions' \
--header "Authorization: Bearer $DASHSCOPE_API_KEY" \
--header 'Content-Type: application/json' \
--data '{
"model": "qwen3-asr-flash",
"messages": [
{
"role": "user",
"content": [
{
"type": "input_audio",
"input_audio": {
"data": "https://dashscope.oss-cn-beijing.aliyuncs.com/audios/welcome.mp3"
}
}
]
}
],
"stream": false,
"asr_options": {
"enable_itn": false
}
}'
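For scripting without curl, the same sync request can be issued from the Python standard library. The sketch below mirrors the JSON body of the curl example above; `build_sync_payload` and `transcribe_sync` are illustrative helper names, and the actual HTTP call requires a live `DASHSCOPE_API_KEY`.

```python
import json
import os
import urllib.request

API_URL = "https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions"

def build_sync_payload(audio_url, model="qwen3-asr-flash", enable_itn=False):
    """Build the JSON body matching the OpenAI-compatible sync example."""
    body = {
        "model": model,
        "messages": [
            {"role": "user",
             "content": [{"type": "input_audio",
                          "input_audio": {"data": audio_url}}]}
        ],
        "stream": False,
        "asr_options": {"enable_itn": enable_itn},
    }
    return json.dumps(body)

def transcribe_sync(audio_url, model="qwen3-asr-flash"):
    """POST the sync request; requires DASHSCOPE_API_KEY in the environment."""
    req = urllib.request.Request(
        API_URL,
        data=build_sync_payload(audio_url, model).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ['DASHSCOPE_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```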
Async long-file transcription (DashScope protocol):
curl -sS --location 'https://dashscope.aliyuncs.com/api/v1/services/audio/asr/transcription' \
--header "Authorization: Bearer $DASHSCOPE_API_KEY" \
--header 'X-DashScope-Async: enable' \
--header 'Content-Type: application/json' \
--data '{
"model": "qwen3-asr-flash-filetrans",
"input": {
"file_url": "https://dashscope.oss-cn-beijing.aliyuncs.com/audios/welcome.mp3"
}
}'
Poll task result:
curl -sS --location "https://dashscope.aliyuncs.com/api/v1/tasks/<task_id>" \
--header "Authorization: Bearer $DASHSCOPE_API_KEY"
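A polling loop with the bounded interval and retry guard recommended under "Operational guidance" can be sketched with the standard library. The `output.task_status` path is an assumption based on the DashScope task response format; verify it against `references/api_reference.md`.

```python
import json
import os
import time
import urllib.request

TASKS_URL = "https://dashscope.aliyuncs.com/api/v1/tasks/"
TERMINAL = {"SUCCEEDED", "FAILED", "CANCELED"}

def task_status(payload):
    # Assumed response path: {"output": {"task_status": "..."}}.
    return payload.get("output", {}).get("task_status", "UNKNOWN")

def poll_task(task_id, interval=10.0, max_attempts=60):
    """Poll the task endpoint until a terminal status or the retry guard trips."""
    headers = {"Authorization": f"Bearer {os.environ['DASHSCOPE_API_KEY']}"}
    for _ in range(max_attempts):
        req = urllib.request.Request(TASKS_URL + task_id, headers=headers)
        with urllib.request.urlopen(req) as resp:
            payload = json.load(resp)
        if task_status(payload) in TERMINAL:
            return payload
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} did not finish after {max_attempts} polls")
```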
Local helper script
Use the bundled script for URL/local-file input and optional async polling:
python skills/ai/audio/alicloud-ai-audio-asr/scripts/transcribe_audio.py \
--audio "https://dashscope.oss-cn-beijing.aliyuncs.com/audios/welcome.mp3" \
--model qwen3-asr-flash \
--language-hints zh,en \
--print-response
Long-file mode:
python skills/ai/audio/alicloud-ai-audio-asr/scripts/transcribe_audio.py \
--audio "https://dashscope.oss-cn-beijing.aliyuncs.com/audios/welcome.mp3" \
--model qwen3-asr-flash-filetrans \
--async \
--wait
Operational guidance
- For local files, use `input_audio.data` (data URI) when a direct URL is unavailable.
- Keep `language_hints` minimal to reduce recognition ambiguity.
- For async tasks, use a 5-20 s polling interval with a max-retry guard.
- Save normalized outputs under `output/alicloud-ai-audio-asr/transcripts/`.
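A minimal sketch of the data-URI encoding mentioned above, assuming base64 inside a `data:` URI is the form `input_audio.data` accepts for local files (the bundled script performs this encoding itself; `to_data_uri` and `file_to_data_uri` are illustrative names):

```python
import base64
import mimetypes
from pathlib import Path

def to_data_uri(data, mime="audio/mpeg"):
    """Wrap raw audio bytes in a base64 data URI. Note that the whole file
    is inlined into the request body, so the audio leaves your machine;
    avoid this for sensitive recordings."""
    return f"data:{mime};base64,{base64.b64encode(data).decode('ascii')}"

def file_to_data_uri(path):
    """Read a local file and encode it, guessing the MIME type from the suffix."""
    mime, _ = mimetypes.guess_type(path)
    return to_data_uri(Path(path).read_bytes(), mime or "application/octet-stream")
```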
Output location
- Default output: `output/alicloud-ai-audio-asr/transcripts/`
- Override the base dir with `OUTPUT_DIR`.
Workflow
- Confirm user intent, region, identifiers, and whether the operation is read-only or mutating.
- Run one minimal read-only query first to verify connectivity and permissions.
- Execute the target operation with explicit parameters and bounded scope.
- Verify results and save output/evidence files.
References
- `references/api_reference.md`
- `references/sources.md`
- Realtime synthesis is provided by `skills/ai/audio/alicloud-ai-audio-tts-realtime/`.
Files: 5 total
