Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Aliyun Modelstudio Entry Test

v1.0.0

Use when running a minimal test matrix for the Model Studio skills that exist in this repo, including image/video/audio, realtime speech, omni, visual reason...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for cinience/aliyun-modelstudio-entry-test.

Prompt Preview: Install & Setup
Install the skill "Aliyun Modelstudio Entry Test" (cinience/aliyun-modelstudio-entry-test) from ClawHub.
Skill page: https://clawhub.ai/cinience/aliyun-modelstudio-entry-test
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install aliyun-modelstudio-entry-test

ClawHub CLI


npx clawhub@latest install aliyun-modelstudio-entry-test
Security Scan
VirusTotal
Suspicious
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The name and description (a minimal test matrix for Alibaba Model Studio skills) align with the runtime instructions: open sub-skill SKILL.md files, run one small request per capability, and save the results. The required actions (SDK calls to Alibaba services) are expected for this purpose.
Instruction Scope
Instructions explicitly tell the operator/agent to read sub-skill SKILL.md files in the repo, run SDK calls, and write output/evidence under output/. They also instruct checking user intent/region and to include certain parameters in evidence files. Reading repo files and writing results is within scope; however the SKILL.md also references reading ~/.alibabacloud/credentials as an alternative auth source — that touches a user credential file outside the skill directory and should be handled carefully.
Install Mechanism
There is no install spec in registry metadata, but SKILL.md instructs creating a venv and running 'pip install dashscope'. Installing a third-party pip package at runtime is common for SDKs but carries moderate risk: the package origin/version is not pinned or proven here. Using an isolated venv is recommended (and the instructions suggest one).
Credentials
SKILL.md requires DASHSCOPE_API_KEY (or credentials stored in ~/.alibabacloud/credentials) to run, but the skill metadata lists no required env vars and no primary credential. This mismatch means the registry entry underreports credential needs. Requiring an Alibaba API key is proportionate to the stated goal, but the missing declaration is an important coherence/visibility issue.
Persistence & Privilege
The skill does not request always:true and is not requesting to modify other skills or system-wide settings. It is user-invocable and allows autonomous invocation by default (platform normal). No additional persistence or elevated privileges are requested by the manifest.
Scan Findings in Context
[NO_SCAN_FINDINGS] expected: The regex-based scanner had nothing to analyze because this is an instruction-only skill with no code files. That absence of findings is not evidence of safety; the SKILL.md itself contains notable instructions (installing dashscope and using DASHSCOPE_API_KEY).
What to consider before installing
Before installing or running this skill:

  1. Treat the SKILL.md as authoritative: it requires installing a third-party Python package ('dashscope') and providing an Alibaba API key (DASHSCOPE_API_KEY or ~/.alibabacloud/credentials), but the registry metadata does not declare those env vars.
  2. Use an isolated virtual environment as instructed, and consider reviewing the 'dashscope' package source or pinning a trusted release/version before pip installing.
  3. Avoid putting long-lived or high-privilege credentials in the environment or in ~/.alibabacloud/credentials for testing; create a scoped test API key with minimal permissions and rotate/delete it after use.
  4. Inspect each referenced sub-skill's SKILL.md (the test will open and execute those) so you understand additional auth needs or network endpoints.
  5. When saving evidence/output, ensure you do not accidentally write secrets or full API responses containing sensitive data.

If you want a deeper review, provide the contents of the referenced sub-skill SKILL.md files and/or the dashscope package origin/version.

Like a lobster shell, security has layers — review code before you run it.

latest: vk975awvqk9e4xk1d88xqq0vf3d841yd5
108 downloads
0 stars
1 version
Updated 3w ago
v1.0.0
MIT-0

Category: task

Model Studio Skills Minimal Test

Run minimal validation for currently available Model Studio skills in this repo and record results.

Prerequisites

  • Install SDK (virtual environment recommended to avoid PEP 668 restrictions):
python3 -m venv .venv
. .venv/bin/activate
python -m pip install dashscope
  • Configure DASHSCOPE_API_KEY (environment variable preferred; or dashscope_api_key in ~/.alibabacloud/credentials).
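
The two auth sources above can be resolved in a consistent order before any test runs. The sketch below is an assumption about the credentials file layout: it treats ~/.alibabacloud/credentials as an INI-style file and searches every section for a `dashscope_api_key` option, since the SKILL.md does not state which section holds it.

```python
import configparser
import os
from pathlib import Path

def resolve_api_key(credentials_path: str = "~/.alibabacloud/credentials"):
    """Prefer the DASHSCOPE_API_KEY env var; fall back to the credentials file."""
    key = os.environ.get("DASHSCOPE_API_KEY")
    if key:
        return key
    path = Path(credentials_path).expanduser()
    if not path.is_file():
        return None
    parser = configparser.ConfigParser()
    parser.read(path)
    # The section name is unspecified in the skill docs, so search all sections.
    for section in parser.sections():
        if parser.has_option(section, "dashscope_api_key"):
            return parser.get(section, "dashscope_api_key")
    return None
```

Resolving the key once, up front, also makes it easy to fail fast with a clear message before any sub-skill test is attempted.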

Test Matrix (currently supported)

  1. Text-to-image → skills/ai/image/aliyun-qwen-image/
  2. Image editing → skills/ai/image/aliyun-qwen-image-edit/
  3. Text-to-video / Image-to-video (i2v) → skills/ai/video/aliyun-wan-video/
  4. Reference-to-video (r2v) → skills/ai/video/aliyun-wan-r2v/
  5. TTS → skills/ai/audio/aliyun-qwen-tts/
  6. ASR transcription (non-realtime) → skills/ai/audio/aliyun-qwen-asr/
  7. Realtime ASR → skills/ai/audio/aliyun-qwen-asr-realtime/
  8. Realtime TTS → skills/ai/audio/aliyun-qwen-tts-realtime/
  9. Live speech translation → skills/ai/audio/aliyun-qwen-livetranslate/
  10. CosyVoice voice clone → skills/ai/audio/aliyun-cosyvoice-voice-clone/
  11. CosyVoice voice design → skills/ai/audio/aliyun-cosyvoice-voice-design/
  12. Voice clone → skills/ai/audio/aliyun-qwen-tts-voice-clone/
  13. Voice design → skills/ai/audio/aliyun-qwen-tts-voice-design/
  14. Omni multimodal → skills/ai/multimodal/aliyun-qwen-omni/
  15. Visual reasoning → skills/ai/multimodal/aliyun-qvq/
  16. Text embedding → skills/ai/search/aliyun-qwen-text-embedding/
  17. Rerank → skills/ai/search/aliyun-qwen-rerank/
  18. Video editing → skills/ai/video/aliyun-wan-edit/

If tests for new capabilities are needed, create the corresponding skill first (use skills/ai/misc/aliyun-modelstudio-crawl-and-skill/ to refresh the model list).

Minimal Flow Per Capability

  1. Open target sub-skill directory and read SKILL.md.
  2. Choose one minimal input example and recommended model.
  3. Run SDK call or script.
  4. Record model, request summary, response summary, duration, and status.
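
Steps 3 and 4 can share one small harness. This is a sketch, not part of the skill: `call_fn` stands in for whatever SDK call the sub-skill's SKILL.md prescribes (injected so the harness stays SDK-agnostic), and the returned dict carries exactly the fields the result template asks for.

```python
import time

def run_capability_test(capability, sub_skill, model, call_fn, request_summary="..."):
    """Run one minimal request via call_fn and record the template fields."""
    start = time.monotonic()
    try:
        response = call_fn()
        status, result_summary = "pass", str(response)[:200]
    except Exception as exc:  # record failures instead of aborting the matrix
        status, result_summary = "fail", f"{type(exc).__name__}: {exc}"
    return {
        "capability": capability,
        "sub_skill": sub_skill,
        "model": model,
        "request_summary": request_summary,
        "result_summary": result_summary,
        "status": status,
        "duration_s": round(time.monotonic() - start, 2),
    }
```

Because failures are recorded rather than raised, one broken capability does not stop the rest of the matrix.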

Result Template

Save as output/aliyun-modelstudio-entry-test-results.md:

# Model Studio Skill Test Results

- Date: YYYY-MM-DD
- Environment: region / API_BASE / auth method

| Capability | Sub-skill | Model | Request summary | Result summary | Status | Notes |
| --- | --- | --- | --- | --- | --- | --- |
| Text-to-image | aliyun-qwen-image | <model-id> | ... | ... | pass/fail | ... |
| Image editing | aliyun-qwen-image-edit | <model-id> | ... | ... | pass/fail | ... |
| Image-to-video (i2v) | aliyun-wan-video | <model-id> | ... | ... | pass/fail | ... |
| Reference-to-video (r2v) | aliyun-wan-r2v | <model-id> | ... | ... | pass/fail | ... |
| TTS | aliyun-qwen-tts | <model-id> | ... | ... | pass/fail | ... |
| ASR (non-realtime) | aliyun-qwen-asr | <model-id> | ... | ... | pass/fail | ... |
| Realtime ASR | aliyun-qwen-asr-realtime | <model-id> | ... | ... | pass/fail | ... |
| Realtime TTS | aliyun-qwen-tts-realtime | <model-id> | ... | ... | pass/fail | ... |
| Live speech translation | aliyun-qwen-livetranslate | <model-id> | ... | ... | pass/fail | ... |
| CosyVoice voice clone | aliyun-cosyvoice-voice-clone | <model-id> | ... | ... | pass/fail | ... |
| CosyVoice voice design | aliyun-cosyvoice-voice-design | <model-id> | ... | ... | pass/fail | ... |
| Voice clone | aliyun-qwen-tts-voice-clone | <model-id> | ... | ... | pass/fail | ... |
| Voice design | aliyun-qwen-tts-voice-design | <model-id> | ... | ... | pass/fail | ... |
| Omni multimodal | aliyun-qwen-omni | <model-id> | ... | ... | pass/fail | ... |
| Visual reasoning | aliyun-qvq | <model-id> | ... | ... | pass/fail | ... |
| Text embedding | aliyun-qwen-text-embedding | <model-id> | ... | ... | pass/fail | ... |
| Rerank | aliyun-qwen-rerank | <model-id> | ... | ... | pass/fail | ... |
| Video editing | aliyun-wan-edit | <model-id> | ... | ... | pass/fail | ... |
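
Rows for the table above can be emitted mechanically from recorded results. A minimal helper, assuming each result is a dict keyed like the template columns (missing fields fall back to the template's `...` placeholder):

```python
def result_row(record):
    """Format one result dict as a row of the results markdown table."""
    columns = ("capability", "sub_skill", "model",
               "request_summary", "result_summary", "status", "notes")
    cells = [str(record.get(key, "...")) for key in columns]
    return "| " + " | ".join(cells) + " |"
```

Appending one such row per capability keeps the results file consistent with the template without hand-editing the table.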

Failure Handling

  • If parameters are unclear, check the target sub-skill's SKILL.md or references/*.md.
  • If a model is unavailable, refresh the model list and retry.
  • For auth issues, verify DASHSCOPE_API_KEY (env var or ~/.alibabacloud/credentials).
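
The "retry" advice above is worth bounding so a persistently unavailable model still fails cleanly. A generic sketch (the attempt count and delay are arbitrary choices, not values from the skill):

```python
import time

def with_retries(fn, attempts=3, delay_s=2.0):
    """Retry a flaky call a bounded number of times; re-raise the last
    error so a persistent failure still surfaces in the results table."""
    last_exc = None
    for i in range(attempts):
        try:
            return fn()
        except Exception as exc:
            last_exc = exc
            if i < attempts - 1:
                time.sleep(delay_s)
    raise last_exc
```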

Validation

mkdir -p output/aliyun-modelstudio-entry-test
echo "validation_placeholder" > output/aliyun-modelstudio-entry-test/validate.txt

Pass criteria: command exits 0 and output/aliyun-modelstudio-entry-test/validate.txt is generated.
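
The pass criteria can be checked mechanically in the same shell session, e.g.:

```shell
mkdir -p output/aliyun-modelstudio-entry-test
echo "validation_placeholder" > output/aliyun-modelstudio-entry-test/validate.txt
# Pass criteria: the file exists and is non-empty.
if [ -s output/aliyun-modelstudio-entry-test/validate.txt ]; then
  echo "PASS"
else
  echo "FAIL"
  exit 1
fi
```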

Output And Evidence

  • Save artifacts, command outputs, and API response summaries under output/aliyun-modelstudio-entry-test/.
  • Include key parameters (region/resource id/time range) in evidence files for reproducibility.
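
Given the scan's warning about writing secrets into evidence files, it is worth redacting before saving. A sketch: the `sk-` prefix regex is an assumption about DashScope-style key formats, so widen it if your keys look different.

```python
import json
import re
from pathlib import Path

# Assumed key shape; adjust the pattern to match your actual secrets.
SECRET_PATTERN = re.compile(r"sk-[A-Za-z0-9]+")

def save_evidence(name, payload, out_dir="output/aliyun-modelstudio-entry-test"):
    """Serialize an evidence record, redacting anything that looks like an API key."""
    text = json.dumps(payload, indent=2, ensure_ascii=False)
    text = SECRET_PATTERN.sub("[REDACTED]", text)
    target_dir = Path(out_dir)
    target_dir.mkdir(parents=True, exist_ok=True)
    target = target_dir / f"{name}.json"
    target.write_text(text, encoding="utf-8")
    return target
```

Redacting at write time is a safety net, not a substitute for keeping keys out of payloads in the first place.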

Workflow

  1. Confirm user intent, region, identifiers, and whether the operation is read-only or mutating.
  2. Run one minimal read-only query first to verify connectivity and permissions.
  3. Execute the target operation with explicit parameters and bounded scope.
  4. Verify results and save output/evidence files.

References

  • Source list: references/sources.md
