Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using it.

Aliyun Modelstudio Entry

v1.0.0

Use when routing Alibaba Cloud Model Studio requests to the right local skill (Qwen text, coder, deep research, image, video, audio, search and multimodal sk...

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal
Benign
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The skill is a routing entry that forwards Model Studio requests to local Alibaba Cloud-related sub-skills; the listed target skill paths and routing behavior match the stated purpose. However, the SKILL.md expects use of the 'dashscope' SDK and a DASHSCOPE_API_KEY, which are not declared in the registry metadata's required env vars or install spec (a minor but important inconsistency).
Instruction Scope
Instructions stay within routing/SDK usage (polling async tasks, mapping capabilities to target skills, saving outputs). They do instruct calls to https://dashscope.aliyuncs.com and saving evidence files that include 'region/resource id/time range' — this is relevant to the task but means the skill will collect and write potentially sensitive environment identifiers to disk. The SKILL.md also tells operators to install and run code locally (pip install dashscope) and to set API keys.
Install Mechanism
No install spec is in the registry (instruction-only), but SKILL.md tells the user to pip install the 'dashscope' package into a virtualenv. That is a standard, moderate-risk package install step; the registry should either declare this requirement or include an install spec so operators know what will be installed. There's no external or obscure URL download, which is good.
Credentials
The runtime instructions require DASHSCOPE_API_KEY (or a credentials file at ~/.alibabacloud/credentials) and show using it in Authorization headers, but the skill metadata lists no required env vars or primary credential. That mismatch is concerning because users may not realize they'll need to provide an API key; the instructions also ask to save region/resource IDs and other parameters into output artifacts, increasing the chance of persisting sensitive identifiers. Require/declare only the minimum privileges and document them in metadata.
Persistence & Privilege
The skill is not always-enabled and does not request elevated platform privileges. It writes output/evidence files to an output/ directory (as described), which is normal for CLI/SDK workflows. There is no indication it modifies other skills or global agent settings.
What to consider before installing
This skill mostly does what it says (route Model Studio requests to local Alibaba-related skills), but there are a few issues to consider before installing or running it:

  • The SKILL.md requires installing the 'dashscope' Python package and setting DASHSCOPE_API_KEY (or using ~/.alibabacloud/credentials), yet the registry metadata lists no required env vars or install steps. Treat this as a red flag: the skill will make outbound API calls and needs a credential you must supply.
  • The skill instructs saving 'region/resource id/time range' and other evidence to output files. Those files can contain sensitive identifiers; ensure your output directory is secure and you understand what will be recorded.
  • Because this is instruction-only, nothing is written into the platform by default, but you will run pip install locally. Inspect the 'dashscope' package source (or vendor it) before installing if you have concerns.

Recommendations:

  • Ask the publisher to update registry metadata to declare DASHSCOPE_API_KEY (or document explicitly why it isn't required).
  • Provide the least-privilege API key possible (e.g., read-only or scoped) and prefer temporary or project-scoped credentials.
  • Review the referenced target sub-skills (their SKILL.md files) before enabling routing to ensure they do not request unrelated credentials.
  • Run the tooling inside an isolated virtualenv and inspect network calls (or run in a restricted environment) if you need to audit behavior.

If you cannot verify the dashscope package or the target sub-skills, treat this skill as untrusted until those checks are completed.

Like a lobster shell, security has layers — review code before you run it.

latest · vk97f815bxdqffyph1rwdmshsmx84173m


SKILL.md

Category: task

Alibaba Cloud Model Studio Entry (Routing)

Route requests to existing local skills to avoid duplicating model/parameter details.

Prerequisites

  • Install SDK (virtual environment recommended to avoid PEP 668 restrictions):
python3 -m venv .venv
. .venv/bin/activate
python -m pip install dashscope
  • Configure DASHSCOPE_API_KEY (environment variable preferred; or dashscope_api_key in ~/.alibabacloud/credentials).
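The two credential sources above can be resolved with a small helper. This is a minimal sketch, assuming the credentials file is INI-style with a `[default]` section holding `dashscope_api_key` (the exact file format is not specified here, and the `resolve_api_key` name is illustrative):

```python
import configparser
import os
from pathlib import Path


def resolve_api_key(env=os.environ,
                    cred_path=Path.home() / ".alibabacloud" / "credentials"):
    """Prefer DASHSCOPE_API_KEY; fall back to the credentials file."""
    key = env.get("DASHSCOPE_API_KEY")
    if key:
        return key
    parser = configparser.ConfigParser()
    # ConfigParser.read() returns an empty list if the file is missing.
    if parser.read(cred_path):
        return parser.get("default", "dashscope_api_key", fallback=None)
    return None
```

Preferring the environment variable keeps ad-hoc runs simple while still honoring a persistent credentials file.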

Routing Table (currently supported in this repo)

| Need | Target skill |
| --- | --- |
| Text generation / reasoning / tool-calling | skills/ai/text/aliyun-qwen-generation/ |
| Coding / repository reasoning | skills/ai/code/aliyun-qwen-coder/ |
| Deep multi-step research | skills/ai/research/aliyun-qwen-deep-research/ |
| Text-to-image / image generation | skills/ai/image/aliyun-qwen-image/ |
| Image editing | skills/ai/image/aliyun-qwen-image-edit/ |
| Text-to-video / image-to-video (t2v/i2v) | skills/ai/video/aliyun-wan-video/ |
| Non-Wan PixVerse video generation | skills/ai/video/aliyun-pixverse-generation/ |
| Reference-to-video (r2v) | skills/ai/video/aliyun-wan-r2v/ |
| Digital human talking / singing avatar | skills/ai/video/aliyun-wan-digital-human/ |
| Expressive portrait video (EMO) | skills/ai/video/aliyun-emo/ |
| Lightweight portrait animation (LivePortrait) | skills/ai/video/aliyun-liveportrait/ |
| Motion transfer / dancing avatar (AnimateAnyone) | skills/ai/video/aliyun-animate-anyone/ |
| Emoji / meme portrait video | skills/ai/video/aliyun-emoji/ |
| Text-to-speech (TTS) | skills/ai/audio/aliyun-qwen-tts/ |
| Speech recognition/transcription (ASR) | skills/ai/audio/aliyun-qwen-asr/ |
| Realtime speech recognition | skills/ai/audio/aliyun-qwen-asr-realtime/ |
| Realtime TTS | skills/ai/audio/aliyun-qwen-tts-realtime/ |
| Live speech translation | skills/ai/audio/aliyun-qwen-livetranslate/ |
| CosyVoice voice clone | skills/ai/audio/aliyun-cosyvoice-voice-clone/ |
| CosyVoice voice design | skills/ai/audio/aliyun-cosyvoice-voice-design/ |
| Voice clone | skills/ai/audio/aliyun-qwen-tts-voice-clone/ |
| Voice design | skills/ai/audio/aliyun-qwen-tts-voice-design/ |
| Omni multimodal interaction | skills/ai/multimodal/aliyun-qwen-omni/ |
| Visual reasoning | skills/ai/multimodal/aliyun-qvq/ |
| OCR / document parsing / table parsing | skills/ai/multimodal/aliyun-qwen-ocr/ |
| Text embeddings | skills/ai/search/aliyun-qwen-text-embedding/ |
| Multimodal embeddings | skills/ai/search/aliyun-qwen-multimodal-embedding/ |
| Rerank | skills/ai/search/aliyun-qwen-rerank/ |
| Vector retrieval | skills/ai/search/aliyun-dashvector-search/ or skills/ai/search/aliyun-opensearch-search/ or skills/ai/search/aliyun-milvus-search/ |
| Document understanding | skills/ai/text/aliyun-docmind-extract/ |
| Video editing | skills/ai/video/aliyun-wan-edit/ |
| Video lip-sync replacement / retalk | skills/ai/video/aliyun-videoretalk/ |
| Model list crawl/update | skills/ai/misc/aliyun-modelstudio-crawl-and-skill/ |
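The routing table can be mirrored as a plain dictionary lookup. This is a minimal sketch using an illustrative subset of the rows above; the capability keys and the `route` helper name are inventions for this example, not part of the skill:

```python
# Illustrative subset of the routing table: capability -> target skill path.
ROUTES = {
    "text-generation": "skills/ai/text/aliyun-qwen-generation/",
    "coding": "skills/ai/code/aliyun-qwen-coder/",
    "text-to-image": "skills/ai/image/aliyun-qwen-image/",
    "tts": "skills/ai/audio/aliyun-qwen-tts/",
    "asr": "skills/ai/audio/aliyun-qwen-asr/",
}


def route(capability):
    """Return the target skill path, or None so the caller can fall back
    to the clarifying questions listed in this SKILL.md."""
    return ROUTES.get(capability)
```

Returning `None` (rather than guessing) matches the "When Not Matched" guidance below: clarify capability first, then add a new skill if the repo lacks it.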

When Not Matched

  • Clarify model capability and input/output type first.
  • If capability is missing in repo, add a new skill first.

Common Missing Capabilities In This Repo (remaining gaps)

  • image translation

  • virtual try-on / digital human / advanced video personas

Troubleshooting Tips

  • For multimodal/ASR download failures, prefer publicly reachable URLs.

  • For ASR parameter errors, pass the audio as a data URI in input_audio.data.

  • For a 400 error on multimodal embedding, ensure input.contents is an array.
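The data-URI workaround for ASR parameter errors can be sketched as a small helper that base64-encodes raw audio bytes. The MIME type and the exact shape expected by input_audio.data are assumptions based on the tip above:

```python
import base64


def audio_to_data_uri(audio_bytes, mime="audio/wav"):
    """Encode raw audio bytes as a data URI (RFC 2397) for input_audio.data."""
    encoded = base64.b64encode(audio_bytes).decode("ascii")
    return f"data:{mime};base64,{encoded}"
```

Inlining the audio this way avoids the service having to fetch a URL, which sidesteps the download failures mentioned above at the cost of a larger request body.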

Async Task Polling Template (video/long-running tasks)

When a request sent with the X-DashScope-Async: enable header returns a task_id, poll as follows:

GET https://dashscope.aliyuncs.com/api/v1/tasks/<task_id>
Authorization: Bearer $DASHSCOPE_API_KEY

Example result fields (success):

{
  "output": {
    "task_status": "SUCCEEDED",
    "video_url": "https://..."
  }
}

Notes:

  • Recommended polling interval: 15-20 seconds, max 10 attempts.
  • After success, download output.video_url.
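The notes above can be sketched as a polling loop. In this minimal sketch the HTTP call is injected as a `fetch` callable so it can be stubbed; a real `fetch` would GET the tasks endpoint shown earlier with the Bearer authorization header. Function and parameter names are illustrative:

```python
import time


def poll_task(task_id, fetch, interval=15, max_attempts=10, sleep=time.sleep):
    """Poll until the task finishes or attempts run out.

    `fetch(task_id)` should return the parsed JSON response body,
    e.g. {"output": {"task_status": "SUCCEEDED", "video_url": "..."}}.
    """
    for _ in range(max_attempts):
        body = fetch(task_id)
        status = body["output"]["task_status"]
        if status == "SUCCEEDED":
            return body["output"]
        if status == "FAILED":
            raise RuntimeError(f"task {task_id} failed: {body}")
        sleep(interval)  # recommended 15-20 s between attempts
    raise TimeoutError(f"task {task_id} not finished after {max_attempts} attempts")
```

Injecting `sleep` as well makes the loop testable without real waiting; after a successful return, the caller downloads `output.video_url` as noted above.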

Clarifying questions (ask when uncertain)

  1. Are you working with text, image, audio, or video?
  2. Is this generation, editing/understanding, or retrieval?
  3. Do you need speech (TTS/ASR/live translate) or retrieval (embedding/rerank/vector DB)?
  4. Do you want runnable SDK scripts or just API/parameter guidance?

References

  • Model list and links: output/alicloud-model-studio-models-summary.md

  • API/parameters/examples: see target sub-skill SKILL.md and references/*.md

  • Official source list: references/sources.md

Validation

mkdir -p output/aliyun-modelstudio-entry
echo "validation_placeholder" > output/aliyun-modelstudio-entry/validate.txt

Pass criteria: command exits 0 and output/aliyun-modelstudio-entry/validate.txt is generated.

Output And Evidence

  • Save artifacts, command outputs, and API response summaries under output/aliyun-modelstudio-entry/.
  • Include key parameters (region/resource id/time range) in evidence files for reproducibility.
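Writing an evidence file under output/aliyun-modelstudio-entry/ can be sketched as below. All field names in the example parameters are illustrative, and per the security-scan notes above, remember that region/resource IDs written this way are potentially sensitive identifiers:

```python
import json
from datetime import datetime, timezone
from pathlib import Path


def save_evidence(params, out_dir="output/aliyun-modelstudio-entry"):
    """Write key request parameters to a timestamped JSON evidence file."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    path = out / f"evidence-{stamp}.json"
    path.write_text(json.dumps(params, indent=2, ensure_ascii=False))
    return path
```

A timestamped filename keeps successive runs from overwriting each other, which helps reproducibility across polling attempts.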

Workflow

  1. Confirm user intent, region, identifiers, and whether the operation is read-only or mutating.
  2. Run one minimal read-only query first to verify connectivity and permissions.
  3. Execute the target operation with explicit parameters and bounded scope.
  4. Verify results and save output/evidence files.
