Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Aliyun Wan Video

v1.0.0

Use when generating videos with the Model Studio DashScope SDK and the Wan video generation models (wan2.6-t2v, wan2.6-i2v-flash, wan2.6-i2v, and regional variants)...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for cinience/aliyun-wan-video.

Prompt preview: Install & Setup
Install the skill "Aliyun Wan Video" (cinience/aliyun-wan-video) from ClawHub.
Skill page: https://clawhub.ai/cinience/aliyun-wan-video
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install aliyun-wan-video

ClawHub CLI


npx clawhub@latest install aliyun-wan-video
Security Scan

VirusTotal: Benign (view report)
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The skill's stated purpose (Aliyun Wan video generation) matches the included Python scripts that call the DashScope SDK, so the capability itself is coherent. However, the registry metadata declares no required credentials while the SKILL.md and scripts clearly require DASHSCOPE_API_KEY (or credentials in ~/.alibabacloud/credentials). That mismatch (no declared env/primary credential but runtime code needing an API key) is unexpected and should be justified.
Instruction Scope
SKILL.md's validation and output paths are inconsistent with the repository code: the validation command references skills/ai/video/aliyun-wan-video/scripts/generate_video.py (a path not present in the package), and SKILL.md states the default output is 'output/aliyun-wan-video/' while the scripts write to 'output/ai-video-wan-video/'. The instructions also direct loading .env files and ~/.alibabacloud/credentials, which is legitimate for auth but broader in scope than the registry metadata indicates. These mismatches can cause validation failures and surprising behavior.
Install Mechanism
There is no install spec in the registry (instruction-only). The SKILL.md recommends installing the 'dashscope' Python package (pip). No remote downloads, installers, or archives are embedded in the skill, so install risk is low aside from the usual risk of third-party Python packages.
Credentials
The runtime expects a DASHSCOPE_API_KEY (env var) or a dashscope_api_key in ~/.alibabacloud/credentials; it also auto-loads .env files from the CWD or detected repo root. The registry lists no required env vars or primary credential, so the skill's credential demands are not declared. Loading .env files and user credentials can expose secrets from the developer's or user's environment if those files are not reviewed first.
Persistence & Privilege
The skill is not set to always:true and does not attempt to alter other skills or global agent configuration. It reads local files and writes output files under an output directory. Autonomous invocation is allowed (the platform default) but is not combined with broad undeclared credentials or always:true, so there are no elevated persistence red flags beyond normal runtime I/O.
What to consider before installing
Key things to check before installing or running this skill:

  • Authentication: The registry declares no required env vars, but the SKILL.md and scripts require DASHSCOPE_API_KEY (or dashscope_api_key in ~/.alibabacloud/credentials). Provide a dedicated, least-privilege API key for testing and do not reuse high-privilege keys.
  • Path mismatches: The SKILL.md validation command and output paths don't match the included scripts' paths and output directory names. Expect the provided validation step to fail unless you adjust paths; review and fix them before relying on automated validation.
  • .env and credentials loading: The scripts auto-load .env files (from the CWD or detected repo root) and read ~/.alibabacloud/credentials. Audit those files for sensitive values you don't want the skill to pick up, and consider running in an isolated environment or container.
  • Third-party package: The skill uses the 'dashscope' Python package. Inspect that package (and pin a specific version) before pip installing it in production.
  • Test safely: Run the scripts in a disposable virtualenv and verify behavior (where files are written, what requests are made) before integrating them into a pipeline.

If you want a safer posture, ask the skill author to: (1) declare DASHSCOPE_API_KEY as a required credential in the metadata, (2) fix the validation and output-path inconsistencies, and (3) document exactly which files the skill reads so you can audit them.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97exvq6g4emkt495fp8y1bpw18414zc
82 downloads
0 stars
1 version
Updated 3 weeks ago
v1.0.0
MIT-0

Category: provider

Model Studio Wan Video

Validation

mkdir -p output/aliyun-wan-video
python -m py_compile skills/ai/video/aliyun-wan-video/scripts/generate_video.py && echo "py_compile_ok" > output/aliyun-wan-video/validate.txt

Pass criteria: command exits 0 and output/aliyun-wan-video/validate.txt is generated.

Output And Evidence

  • Save task IDs, polling responses, and final video URLs to output/aliyun-wan-video/.
  • Keep one end-to-end run log for troubleshooting.
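A small helper along these lines can persist that evidence; the function and file names are illustrative, not part of the skill's API:

import json
from pathlib import Path

def save_evidence(name: str, data: dict, out_dir: str = "output/aliyun-wan-video") -> Path:
    # Write one JSON file per artifact (task ID, polling response, final result).
    path = Path(out_dir)
    path.mkdir(parents=True, exist_ok=True)
    target = path / f"{name}.json"
    target.write_text(json.dumps(data, indent=2, default=str))
    return target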

Provide consistent video generation behavior for the video-agent pipeline by standardizing video.generate inputs/outputs and using the DashScope SDK (Python) with exact model names.

Critical model names

Use one of these exact model strings:

  • wan2.6-t2v
  • wan2.6-t2v-us
  • wan2.2-t2v-plus
  • wan2.2-t2v-flash
  • wan2.6-i2v-flash
  • wan2.6-i2v
  • wan2.6-i2v-us
  • wanx2.1-t2v-turbo

Prerequisites

  • Install SDK (recommended in a venv to avoid PEP 668 limits):
python3 -m venv .venv
. .venv/bin/activate
python -m pip install dashscope
  • Set DASHSCOPE_API_KEY in your environment, or add dashscope_api_key to ~/.alibabacloud/credentials (env takes precedence).
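For reference, a minimal sketch of that precedence (environment variable first, then the credentials file); the [default] section name follows the comment in the quick-start example below, and the helper name is illustrative:

import configparser
import os
from pathlib import Path
from typing import Optional

def resolve_api_key() -> Optional[str]:
    # The environment variable takes precedence over the credentials file.
    key = os.getenv("DASHSCOPE_API_KEY")
    if key:
        return key
    creds = Path.home() / ".alibabacloud" / "credentials"
    if not creds.exists():
        return None
    parser = configparser.ConfigParser()
    parser.read(creds)
    return parser.get("default", "dashscope_api_key", fallback=None)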

Normalized interface (video.generate)

Request

  • prompt (string, required)
  • negative_prompt (string, optional)
  • duration (number, required) seconds
  • fps (number, required)
  • size (string, required) e.g. 1280*720
  • seed (int, optional)
  • reference_image (string | bytes, optional for t2v, required for i2v family models)
  • motion_strength (number, optional)

Response

  • video_url (string)
  • duration (number)
  • fps (number)
  • seed (int)
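For illustration, the normalized request/response could be expressed as typed dictionaries in Python; the class names are ours, only the field names come from the lists above:

from typing import TypedDict, Union

class VideoGenerateRequest(TypedDict, total=False):
    prompt: str                          # required
    negative_prompt: str
    duration: float                      # seconds, required
    fps: float                           # required
    size: str                            # e.g. "1280*720", required
    seed: int
    reference_image: Union[str, bytes]   # required for i2v models
    motion_strength: float

class VideoGenerateResponse(TypedDict):
    video_url: str
    duration: float
    fps: float
    seed: int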

Quick start (Python + DashScope SDK)

Video generation is usually asynchronous. Expect a task ID and poll until completion. Note: Wan i2v models require an input image; pure t2v models such as wan2.6-t2v can omit reference_image.

import os
from dashscope import VideoSynthesis

# Prefer env var for auth: export DASHSCOPE_API_KEY=...
# Or use ~/.alibabacloud/credentials with dashscope_api_key under [default].

def generate_video(req: dict) -> dict:
    payload = {
        "model": req.get("model", "wan2.6-i2v-flash"),
        "prompt": req["prompt"],
        "negative_prompt": req.get("negative_prompt"),
        "duration": req.get("duration", 4),
        "fps": req.get("fps", 24),
        "size": req.get("size", "1280*720"),
        "seed": req.get("seed"),
        "motion_strength": req.get("motion_strength"),
        "api_key": os.getenv("DASHSCOPE_API_KEY"),
    }
    # Drop unset optional fields so None values are not forwarded to the API.
    payload = {k: v for k, v in payload.items() if v is not None}

    if req.get("reference_image"):
        # DashScope expects img_url for i2v models; local files are auto-uploaded.
        payload["img_url"] = req["reference_image"]

    response = VideoSynthesis.call(**payload)

    # Some SDK versions return the finished video directly as output.video_url;
    # others return a task_id that must be polled until status is SUCCEEDED.
    results = response.output.get("results") or []
    result = results[0] if results else None
    video_url = result.get("url") if result else response.output.get("video_url")

    return {
        "video_url": video_url,
        "duration": response.output.get("duration"),
        "fps": response.output.get("fps"),
        "seed": response.output.get("seed"),
    }
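A call might then look like this (the prompt and image values are placeholders, not from the skill):

result = generate_video({
    "model": "wan2.6-i2v-flash",
    "prompt": "A paper boat drifting down a rainy street, cinematic lighting",
    "reference_image": "https://example.com/frame.png",
    "duration": 4,
    "fps": 24,
    "size": "1280*720",
})
print(result["video_url"])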

Async handling (polling)

import os
from dashscope import VideoSynthesis

# Example request; the values below are placeholders.
req = {
    "model": "wan2.6-i2v-flash",
    "prompt": "A lighthouse at dusk, waves rolling in",
    "reference_image": "https://example.com/frame.png",
}

task = VideoSynthesis.async_call(
    model=req.get("model", "wan2.6-i2v-flash"),
    prompt=req["prompt"],
    img_url=req["reference_image"],
    duration=req.get("duration", 4),
    fps=req.get("fps", 24),
    size=req.get("size", "1280*720"),
    api_key=os.getenv("DASHSCOPE_API_KEY"),
)

# wait() blocks until the task reaches a terminal state and returns the result.
final = VideoSynthesis.wait(task)
video_url = final.output.get("video_url")

Operational guidance

  • Video generation can take minutes; expose progress and allow cancel/retry.
  • Cache by (prompt, negative_prompt, duration, fps, size, seed, reference_image hash, motion_strength).
  • Store video assets in object storage and persist only URLs in metadata.
  • reference_image can be a URL or local path; the SDK auto-uploads local files.
  • If you get Field required: input.img_url, the reference image is missing or not mapped.
  • wan2.6-t2v and wan2.6-t2v-us add multi-shot narrative support and optional audio input according to the official docs.
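One way to build that cache key is to hash the normalized fields, reducing the reference image to a digest so local bytes and URLs compare the same way; the helper name is illustrative:

import hashlib
import json

def cache_key(req: dict) -> str:
    ref = req.get("reference_image")
    if isinstance(ref, bytes):
        ref_hash = hashlib.sha256(ref).hexdigest()
    elif ref:
        ref_hash = hashlib.sha256(str(ref).encode("utf-8")).hexdigest()
    else:
        ref_hash = None
    fields = {
        "prompt": req.get("prompt"),
        "negative_prompt": req.get("negative_prompt"),
        "duration": req.get("duration"),
        "fps": req.get("fps"),
        "size": req.get("size"),
        "seed": req.get("seed"),
        "reference_image": ref_hash,
        "motion_strength": req.get("motion_strength"),
    }
    return hashlib.sha256(json.dumps(fields, sort_keys=True).encode()).hexdigest()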

Size notes

  • Use WxH format (e.g. 1280*720).
  • Prefer common sizes; unsupported sizes can return 400.

Output location

  • Default output: output/aliyun-wan-video/videos/
  • Override base dir with OUTPUT_DIR.
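A plausible way to resolve that location in Python (the bundled scripts may do this slightly differently):

import os
from pathlib import Path

# OUTPUT_DIR overrides the base directory; videos still land in a videos/ subfolder.
base = Path(os.environ.get("OUTPUT_DIR", "output/aliyun-wan-video"))
videos_dir = base / "videos"
videos_dir.mkdir(parents=True, exist_ok=True)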

Anti-patterns

  • Do not invent model names or aliases; use only the official Wan model IDs listed above.
  • Do not block the UI without progress updates.
  • Do not retry blindly on 4xx; handle validation failures explicitly.
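As a sketch of that last point, a retry wrapper can back off on server-side errors while surfacing 4xx validation failures immediately; the status_code/message attributes assume the usual DashScope response shape, so adjust if your SDK version differs:

import time

def call_with_retry(fn, max_retries: int = 3):
    for attempt in range(max_retries + 1):
        response = fn()
        code = getattr(response, "status_code", 200)
        if code < 400:
            return response
        if code < 500:
            # Validation or auth problem: fix the request instead of retrying.
            raise ValueError(f"Request rejected ({code}): {getattr(response, 'message', '')}")
        if attempt == max_retries:
            raise RuntimeError(f"Server error {code} after {max_retries} retries")
        time.sleep(2 ** attempt)  # exponential backoff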

Workflow

  1. Confirm user intent, region, identifiers, and whether the operation is read-only or mutating.
  2. Run one minimal read-only query first to verify connectivity and permissions.
  3. Execute the target operation with explicit parameters and bounded scope.
  4. Verify results and save output/evidence files.

References

  • See references/api_reference.md for DashScope SDK mapping and async handling notes.

  • Source list: references/sources.md
