Aliyun Pixverse Generation
v1.0.0
Use when generating videos with Alibaba Cloud Model Studio PixVerse models (`pixverse/pixverse-v5.6-t2v`, `pixverse/pixverse-v5.6-it2v`, `pixverse/pixverse-v...
MIT-0
Security Scan
OpenClaw
Suspicious
medium confidence
Purpose & Capability
Name/description, model strings, endpoints, and the provided helper script all align with Alibaba Cloud Model Studio PixVerse video generation. The code prepares JSON payloads for the documented dashscope endpoints, so the capability claim is plausible.
Instruction Scope
SKILL.md instructs the agent to require a DASHSCOPE_API_KEY (or to add a key to ~/.alibabacloud/credentials) and to save normalized payloads to output/, but the registry metadata declares no required env vars. The SKILL.md also claims OUTPUT_DIR can override output location, yet the prepare_aishi_request.py script does not read OUTPUT_DIR. The validation command references a different path (skills/ai/video/...) than where the script actually lives (scripts/prepare_aishi_request.py). These mismatches are scope/implementation inconsistencies that could cause unexpected behavior.
Install Mechanism
No install spec in the registry (instruction-only), but SKILL.md tells users to create a venv and pip install a 'dashscope' package. This is a typical pattern, but the package provenance (dashscope on PyPI or a different source) isn’t verified in the skill — so the install step is plausible but should be validated by the user.
Credentials
SKILL.md requires DASHSCOPE_API_KEY or adding credentials to ~/.alibabacloud/credentials but the registry lists no required environment variables. Requesting an API key for the provider is reasonable, but the omission in metadata and the instruction to write to ~/.alibabacloud/credentials (a shared config path) are notable mismatches and merit caution. Also the skill references OUTPUT_DIR as an override in docs but the script ignores it.
Persistence & Privilege
The skill does not request always:true, does not declare system-wide install behavior, and doesn't attempt to modify other skills or global config. It is instruction-only with a helper script that writes output files under its own output/ directory.
What to consider before installing
This skill appears to actually implement Aliyun PixVerse video generation, but there are several inconsistencies you should resolve before use:
- Credentials: SKILL.md requires DASHSCOPE_API_KEY (or adding dashscope_api_key to ~/.alibabacloud/credentials), but the registry metadata lists no required env vars. Confirm you are comfortable providing an API key and verify the minimum scope/permissions for that key.
- Config file use: The skill suggests storing credentials in ~/.alibabacloud/credentials (a shared file). Prefer using a dedicated API key with limited scope; avoid placing sensitive keys where other processes/users could read them.
- Documentation vs code mismatches: The validation path in SKILL.md points at a different file path than the actual script; SKILL.md says OUTPUT_DIR can override the output directory but the script does not honor OUTPUT_DIR. These are likely documentation mistakes but could cause failures — test in a sandbox first.
- Package provenance: The instruction to pip install 'dashscope' is reasonable, but verify the package source (PyPI or official Alibaba SDK). Confirm the package is the legitimate SDK and review its permissions and release page before installing.
- Region limitation: The skill says the family only supports China mainland (Beijing). Make sure that limitation is acceptable for your data and compliance requirements.
Actions to take before installing: run the included script in an isolated environment (local sandbox or ephemeral container), inspect the network calls during a dry run, verify the dashscope package (or call the API directly using your own client), and only provide API keys with minimal privileges. If you want, ask the skill author to update metadata to declare the required DASHSCOPE_API_KEY and to fix the documented paths and OUTPUT_DIR behavior.
License
MIT-0
Free to use, modify, and redistribute. No attribution required.
SKILL.md
Category: provider
Model Studio Aishi Video Generation
Validation
mkdir -p output/aliyun-pixverse-generation
python -m py_compile skills/ai/video/aliyun-pixverse-generation/scripts/prepare_aishi_request.py && echo "py_compile_ok" > output/aliyun-pixverse-generation/validate.txt
Pass criteria: command exits 0 and output/aliyun-pixverse-generation/validate.txt is generated.
Output And Evidence
- Save normalized request payloads, chosen model variant, and task polling snapshots under `output/aliyun-pixverse-generation/`.
- Record region, resolution/size, duration, and whether audio generation was enabled.
Use Aishi when the user explicitly wants the non-Wan PixVerse family for video generation.
Critical model names
Use one of these exact model strings:
- `pixverse/pixverse-v5.6-t2v`
- `pixverse/pixverse-v5.6-it2v`
- `pixverse/pixverse-v5.6-kf2v`
- `pixverse/pixverse-v5.6-r2v`
Selection guidance:
- Use `pixverse/pixverse-v5.6-t2v` for text-only generation.
- Use `pixverse/pixverse-v5.6-it2v` for first-frame image-to-video.
- Use `pixverse/pixverse-v5.6-kf2v` for first-frame + last-frame transitions.
- Use `pixverse/pixverse-v5.6-r2v` for multi-image character/style consistency.
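The selection rules above can be sketched as a small helper. The function and argument names are illustrative, not part of the skill itself:

```python
def pick_pixverse_model(has_first_frame=False, has_last_frame=False,
                        reference_images=0):
    """Map an input combination to the PixVerse model string it calls for."""
    if reference_images > 0:
        return "pixverse/pixverse-v5.6-r2v"   # multi-image consistency
    if has_first_frame and has_last_frame:
        return "pixverse/pixverse-v5.6-kf2v"  # first + last frame transition
    if has_first_frame:
        return "pixverse/pixverse-v5.6-it2v"  # first-frame image-to-video
    return "pixverse/pixverse-v5.6-t2v"       # text-only generation
```

The checks run in order of specificity, so reference images win over frame inputs.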
Prerequisites
- This family currently only supports China mainland (Beijing).
- Install the SDK, or call the HTTP endpoints directly:
python3 -m venv .venv
. .venv/bin/activate
python -m pip install dashscope
- Set `DASHSCOPE_API_KEY` in your environment, or add `dashscope_api_key` to `~/.alibabacloud/credentials`.
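A minimal sketch of that credential lookup order, assuming the credentials file is INI-style with the key under some section (check your actual `~/.alibabacloud/credentials` layout; the function name is illustrative):

```python
import configparser
import os
from pathlib import Path

def resolve_api_key(credentials_path="~/.alibabacloud/credentials"):
    """Return the DashScope API key: env var first, then the shared file."""
    key = os.environ.get("DASHSCOPE_API_KEY")
    if key:
        return key
    path = Path(credentials_path).expanduser()
    if path.exists():
        parser = configparser.ConfigParser()
        parser.read(path)
        for section in parser.sections():
            if "dashscope_api_key" in parser[section]:
                return parser[section]["dashscope_api_key"]
    return None  # no credential found in either location
```

Preferring the environment variable keeps a scoped, per-session key ahead of anything stored in the shared file.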
Normalized interface (video.generate)
Request
- `model` (string, required)
- `prompt` (string, optional for `it2v`, required for other variants)
- `media` (array<object>, optional)
- `size` (string, optional): direct pixel size such as `1280*720`, used by `t2v` and `r2v`
- `resolution` (string, optional): `360P`/`540P`/`720P`/`1080P`, used by `it2v` and `kf2v`
- `duration` (int, required): `5`/`8`/`10`, except 1080P only supports `5`/`8`
- `audio` (bool, optional)
- `watermark` (bool, optional)
- `seed` (int, optional)
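These field rules can be enforced before anything touches the network. The sketch below builds a flat request dict from the documented fields; the function name is illustrative, and wrapping the dict into DashScope's actual wire format is left to the skill's helper script:

```python
VALID_DURATIONS = {5, 8, 10}

def build_request(model, prompt=None, media=None, size=None,
                  resolution=None, duration=5, audio=None,
                  watermark=None, seed=None):
    """Validate the documented field rules and return a normalized dict."""
    variant = model.rsplit("-", 1)[-1]          # t2v / it2v / kf2v / r2v
    if variant != "it2v" and not prompt:
        raise ValueError("prompt is required for %s" % variant)
    if duration not in VALID_DURATIONS:
        raise ValueError("duration must be 5, 8, or 10")
    if resolution == "1080P" and duration == 10:
        raise ValueError("1080P only supports durations 5 and 8")
    if variant in ("t2v", "r2v") and resolution:
        raise ValueError("%s uses size, not resolution" % variant)
    if variant in ("it2v", "kf2v") and size:
        raise ValueError("%s uses resolution, not size" % variant)
    request = {"model": model, "duration": duration}
    for key, value in (("prompt", prompt), ("media", media), ("size", size),
                       ("resolution", resolution), ("audio", audio),
                       ("watermark", watermark), ("seed", seed)):
        if value is not None:      # omit unset optional fields entirely
            request[key] = value
    return request
```

Failing fast on a bad size/resolution combination is cheaper than submitting a task that the service will reject.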
Response
- `task_id` (string)
- `task_status` (string)
- `video_url` (string, when finished)
Endpoint and execution model
- Submit task: `POST https://dashscope.aliyuncs.com/api/v1/services/aigc/video-generation/video-synthesis`
- Poll task: `GET https://dashscope.aliyuncs.com/api/v1/tasks/{task_id}`
- HTTP calls are async only and must set header `X-DashScope-Async: enable`.
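The submit-then-poll flow can be sketched with the standard library alone. The response envelope here (an `output` wrapper with `SUCCEEDED`/`FAILED` status values) is assumed from typical DashScope task responses; verify it against the live API before relying on it:

```python
import json
import time
import urllib.request

SUBMIT_URL = ("https://dashscope.aliyuncs.com/api/v1/services/"
              "aigc/video-generation/video-synthesis")
TASK_URL = "https://dashscope.aliyuncs.com/api/v1/tasks/{task_id}"

def submit_and_poll(api_key, payload, interval=10, timeout=600):
    """Submit an async video task, then poll until it finishes or fails."""
    headers = {
        "Authorization": "Bearer " + api_key,
        "Content-Type": "application/json",
        "X-DashScope-Async": "enable",   # required: HTTP calls are async only
    }
    req = urllib.request.Request(SUBMIT_URL, data=json.dumps(payload).encode(),
                                 headers=headers, method="POST")
    with urllib.request.urlopen(req) as resp:
        task_id = json.load(resp)["output"]["task_id"]
    deadline = time.time() + timeout
    while time.time() < deadline:
        poll = urllib.request.Request(
            TASK_URL.format(task_id=task_id),
            headers={"Authorization": "Bearer " + api_key})
        with urllib.request.urlopen(poll) as resp:
            out = json.load(resp)["output"]
        if out["task_status"] in ("SUCCEEDED", "FAILED"):
            return out                   # terminal state: caller inspects it
        time.sleep(interval)
    raise TimeoutError("task %s did not finish in time" % task_id)
```

The initial response only carries a task ID; the video URL appears in the poll result once the task reaches a terminal state.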
Quick start
Text-to-video:
python skills/ai/video/aliyun-pixverse-generation/scripts/prepare_aishi_request.py \
--model pixverse/pixverse-v5.6-t2v \
--prompt "A compact robot walks through a rainy neon alley." \
--size 1280*720 \
--duration 5
Image-to-video:
python skills/ai/video/aliyun-pixverse-generation/scripts/prepare_aishi_request.py \
--model pixverse/pixverse-v5.6-it2v \
--prompt "The turtle swims slowly as the camera rises." \
--media image_url=https://example.com/turtle.webp \
--resolution 720P \
--duration 5
Operational guidance
- `t2v` and `r2v` use `size`; `it2v` and `kf2v` use `resolution`.
- For `kf2v`, provide exactly one `first_frame` and one `last_frame`.
- For `r2v`, you can pass up to 7 reference images.
- Aishi returns task IDs first; do not treat the initial response as the final video result.
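The per-variant media rules above can be checked up front. This sketch assumes each media entry is a dict whose key names its role (`first_frame`, `last_frame`, `image_url`); the actual schema used by the helper script may differ, and the function name is illustrative:

```python
def check_media(variant, media):
    """Raise ValueError if the media list violates the per-variant rules."""
    keys = [k for entry in media for k in entry]
    if variant == "kf2v":
        if keys.count("first_frame") != 1 or keys.count("last_frame") != 1:
            raise ValueError("kf2v needs exactly one first_frame "
                             "and one last_frame")
    elif variant == "r2v":
        if not 1 <= len(media) <= 7:
            raise ValueError("r2v accepts between 1 and 7 reference images")
```

Usage: call it with the variant suffix and the media list before writing the request payload.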
Output location
- Default output: `output/aliyun-pixverse-generation/request.json`
- Override base dir with `OUTPUT_DIR`.
References
references/sources.md