Install

openclaw skills install alicloud-ai-video-wan-r2v

Category: provider

Generate reference-based videos with Alibaba Cloud Model Studio Wan R2V models (wan2.6-r2v-flash, wan2.6-r2v). Use when creating multi-shot videos from reference video/image material, preserving character style, or documenting reference-to-video request/response flows.
Validate

mkdir -p output/alicloud-ai-video-wan-r2v
python -m py_compile skills/ai/video/alicloud-ai-video-wan-r2v/scripts/prepare_r2v_request.py && echo "py_compile_ok" > output/alicloud-ai-video-wan-r2v/validate.txt

Pass criteria: the command exits 0 and output/alicloud-ai-video-wan-r2v/validate.txt is generated.
Outputs are written under output/alicloud-ai-video-wan-r2v/.

Use Wan R2V for reference-to-video generation. This is different from i2v (single image to video).
Models

Use one of these exact model strings:

wan2.6-r2v-flash
wan2.6-r2v

Setup

python3 -m venv .venv
. .venv/bin/activate
python -m pip install dashscope
Credentials

Set DASHSCOPE_API_KEY in your environment, or add dashscope_api_key to ~/.alibabacloud/credentials.

Request parameters

prompt (string, required)
reference_video (string | bytes, required)
reference_image (string | bytes, optional)
duration (number, optional)
fps (number, optional)
size (string, optional)
seed (int, optional)

Response fields

video_url (string)
task_id (string, when async)
request_id (string)

Poll an async task until SUCCEEDED or a terminal failure status is returned.

Usage

Prepare a normalized request JSON and validate the response schema:
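As a rough illustration of the normalization the script performs, here is a minimal sketch that builds a request payload from the parameters above. The helper name, field names, and defaulting behavior are assumptions for illustration; they are not the actual prepare_r2v_request.py implementation.

```python
import json

def prepare_r2v_request(prompt, reference_video, reference_image=None,
                        duration=None, fps=None, size=None, seed=None):
    """Hypothetical helper: build a normalized Wan R2V request dict
    mirroring the parameter table above (not the real script)."""
    if not prompt:
        raise ValueError("prompt is required")
    if not reference_video:
        raise ValueError("reference_video is required")
    payload = {
        "model": "wan2.6-r2v-flash",  # or "wan2.6-r2v"
        "prompt": prompt,
        "reference_video": reference_video,
    }
    # Include optional fields only when the caller sets them.
    for key, value in {
        "reference_image": reference_image,
        "duration": duration,
        "fps": fps,
        "size": size,
        "seed": seed,
    }.items():
        if value is not None:
            payload[key] = value
    return payload

req = prepare_r2v_request(
    "Generate a short montage with consistent character style",
    "https://example.com/reference.mp4",
    seed=42,
)
print(json.dumps(req, indent=2))
```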
.venv/bin/python skills/ai/video/alicloud-ai-video-wan-r2v/scripts/prepare_r2v_request.py \
--prompt "Generate a short montage with consistent character style" \
--reference-video "https://example.com/reference.mp4"
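The response-schema check could look roughly like the sketch below, which tests a parsed response dict against the fields documented above (video_url, task_id, request_id). The validator name and rules are assumptions for illustration, not the actual script's logic.

```python
def validate_r2v_response(resp: dict) -> list:
    """Hypothetical validator: return a list of schema problems
    for a Wan R2V response, empty if it looks well-formed."""
    problems = []
    if "request_id" not in resp:
        problems.append("missing request_id")
    # Async submissions return a task_id first; completed tasks carry video_url.
    if "video_url" not in resp and "task_id" not in resp:
        problems.append("expected video_url or task_id")
    for key in ("video_url", "task_id", "request_id"):
        if key in resp and not isinstance(resp[key], str):
            problems.append(f"{key} must be a string")
    return problems
```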
Generated videos are written to output/alicloud-ai-video-wan-r2v/videos/OUTPUT_DIR.

References

references/sources.md