Install

openclaw skills install alicloud-ai-video-wan-video

Generate videos with the Model Studio DashScope SDK using Wan i2v models (wan2.6-i2v-flash, wan2.6-i2v, wan2.6-i2v-us). Use when implementing or documenting video.generate requests/responses, mapping prompt/negative_prompt/duration/fps/size/seed/reference_image/motion_strength, or integrating video generation into the video-agent pipeline.

Category: provider
Validate

mkdir -p output/alicloud-ai-video-wan-video
python -m py_compile skills/ai/video/alicloud-ai-video-wan-video/scripts/generate_video.py && echo "py_compile_ok" > output/alicloud-ai-video-wan-video/validate.txt

Pass criteria: the command exits 0 and output/alicloud-ai-video-wan-video/validate.txt is generated.
Artifacts land under output/alicloud-ai-video-wan-video/.

Purpose

Provide consistent video generation behavior for the video-agent pipeline by standardizing video.generate inputs/outputs and calling the DashScope SDK (Python) with the exact model name.
Models

Use one of these exact model strings:

- wan2.2-t2v-plus
- wan2.2-t2v-flash
- wan2.6-i2v-flash
- wan2.6-i2v
- wan2.6-i2v-us
- wan2.6-t2v-us
- wanx2.1-t2v-turbo

Setup

python3 -m venv .venv
. .venv/bin/activate
python -m pip install dashscope
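
A quick sanity check that the SDK is importable, using only the standard library (no API call is made):

import importlib.metadata

import dashscope  # raises ImportError if the install failed
print("dashscope", importlib.metadata.version("dashscope"))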
Auth

Set DASHSCOPE_API_KEY in your environment, or add dashscope_api_key to ~/.alibabacloud/credentials (the environment variable takes precedence).

Request fields

- prompt (string, required)
- negative_prompt (string, optional)
- duration (number, required), in seconds
- fps (number, required)
- size (string, required), e.g. 1280*720
- seed (int, optional)
- reference_image (string | bytes; optional for t2v, required for i2v family models)
- motion_strength (number, optional)

Response fields

- video_url (string)
- duration (number)
- fps (number)
- seed (int)

Video generation is usually asynchronous. Expect a task ID and poll until completion.
Note: Wan i2v models require an input image; pure t2v models can omit reference_image.
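
For concreteness, one request in the shape above and the normalized response it should map to; the prompt, image URL, and returned values are placeholders, not output from a real call:

request = {
    "model": "wan2.6-i2v-flash",
    "prompt": "A slow dolly shot down a rain-soaked street at night",
    "negative_prompt": "blurry, low quality",
    "duration": 4,
    "fps": 24,
    "size": "1280*720",
    "seed": 42,
    "reference_image": "https://example.com/first-frame.png",  # required for i2v
    "motion_strength": 0.6,
}

# Expected normalized response shape:
# {"video_url": "https://...", "duration": 4, "fps": 24, "seed": 42}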
Sync example:

import os

from dashscope import VideoSynthesis

# Prefer the env var for auth: export DASHSCOPE_API_KEY=...
# Or use ~/.alibabacloud/credentials with dashscope_api_key under [default].

def generate_video(req: dict) -> dict:
    payload = {
        "model": req.get("model", "wan2.6-i2v-flash"),
        "prompt": req["prompt"],
        "negative_prompt": req.get("negative_prompt"),
        "duration": req.get("duration", 4),
        "fps": req.get("fps", 24),
        "size": req.get("size", "1280*720"),
        "seed": req.get("seed"),
        "motion_strength": req.get("motion_strength"),
        "api_key": os.getenv("DASHSCOPE_API_KEY"),
    }
    # Drop unset optional fields so the API never sees explicit nulls.
    payload = {k: v for k, v in payload.items() if v is not None}
    if req.get("reference_image"):
        # DashScope expects img_url for i2v models; local files are auto-uploaded.
        payload["img_url"] = req["reference_image"]
    response = VideoSynthesis.call(**payload)
    # Some SDK versions require polling for the final result:
    # if a task_id is returned, poll until status is SUCCEEDED.
    results = response.output.get("results") or []
    first = results[0] if results else None
    return {
        # Depending on SDK version, the URL is results[0].url or output.video_url.
        "video_url": (first.get("url") if first else None) or response.output.get("video_url"),
        "duration": response.output.get("duration"),
        "fps": response.output.get("fps"),
        "seed": response.output.get("seed"),
    }
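
A minimal invocation of the helper above; the prompt and image URL are placeholders, and on SDK versions that hand back a task instead of a finished result you would poll before reading these fields:

result = generate_video({
    "model": "wan2.6-i2v-flash",
    "prompt": "Waves rolling onto a beach at golden hour",
    "reference_image": "https://example.com/first-frame.png",
    "duration": 4,
    "fps": 24,
    "size": "1280*720",
})
print(result["video_url"])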
Async example (submit the task, then block until it finishes):

import os

from dashscope import VideoSynthesis

# req as assembled by the caller; the values here are placeholders.
req = {
    "prompt": "A slow pan across a mountain lake at dawn",
    "reference_image": "https://example.com/first-frame.png",
}

task = VideoSynthesis.async_call(
    model=req.get("model", "wan2.6-i2v-flash"),
    prompt=req["prompt"],
    img_url=req["reference_image"],
    duration=req.get("duration", 4),
    fps=req.get("fps", 24),
    size=req.get("size", "1280*720"),
    api_key=os.getenv("DASHSCOPE_API_KEY"),
)
# wait() polls the task until it reaches a terminal state.
final = VideoSynthesis.wait(task)
video_url = final.output.get("video_url")
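
Once a video_url comes back, a sketch of saving it under the skill's output directory; the filename scheme is an assumption, and only the standard library is used:

import pathlib
import urllib.request

out_dir = pathlib.Path("output/alicloud-ai-video-wan-video/videos")
out_dir.mkdir(parents=True, exist_ok=True)
if video_url:
    # Filename is a hypothetical choice; result URLs are typically
    # time-limited, so download soon after the task succeeds.
    urllib.request.urlretrieve(video_url, out_dir / "result.mp4")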
Notes

- Record the full request tuple (prompt, negative_prompt, duration, fps, size, seed, reference_image hash, motion_strength) alongside each generated video.
- reference_image can be a URL or a local path; the SDK auto-uploads local files.
- If the API returns "Field required: input.img_url", the reference image is missing or not mapped.
- size must use the width*height format (e.g. 1280*720).
- Save downloaded videos under output/alicloud-ai-video-wan-video/videos/ (or OUTPUT_DIR, if set).
- See references/api_reference.md for DashScope SDK mapping and async handling notes.
Source list: references/sources.md