Aliyun Animate Anyone
v1.0.0
Use when generating dance or motion-transfer videos with Alibaba Cloud Model Studio AnimateAnyone (`animate-anyone-gen2`) using a detected character image an...
MIT-0
Security Scan
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The name, description, model names, and included helper script all align with an Alibaba Cloud AnimateAnyone integration. However, the SKILL.md lists a required API key (DASHSCOPE_API_KEY) and an Alibaba Cloud credentials file path that are not declared in the skill's top-level requirements, which is an inconsistency between claimed needs and declared requirements.
Instruction Scope
Runtime instructions ask the agent to set an environment variable or add a key to ~/.alibabacloud/credentials and to use public HTTP/HTTPS input URLs; the included script itself only writes a JSON request and does not perform network calls. The instructions therefore expect access to credentials/config files and to call external Alibaba APIs, but those accesses are not reflected in the declared requirements. The skill's instructions do not explicitly restrict what agent context may be read beyond that credentials path.
Install Mechanism
No install spec is present and the only code writes request JSON locally; there is no download-from-URL or package installation. This is low-risk from an install/execution perspective.
Credentials
The SKILL.md requires DASHSCOPE_API_KEY or a dashscope_api_key entry in ~/.alibabacloud/credentials, but the skill manifest declares no required environment variables or primary credential. This mismatch is disproportionate to the manifest and should be corrected. Also note that placing keys in ~/.alibabacloud/credentials may grant access to other Alibaba Cloud APIs unless a scoped/minimal key is used.
Persistence & Privilege
The skill does not request always:true, does not modify other skills, and does not install persistent components. It is not requesting elevated or permanent presence.
What to consider before installing
This skill appears to be a simple request-preparer for Alibaba Cloud AnimateAnyone, but the SKILL.md expects an API key (DASHSCOPE_API_KEY) or an entry in ~/.alibabacloud/credentials while the manifest lists no required credentials; that is an inconsistency. Before installing or using it:
1) Confirm with the skill author why credentials weren't declared and whether the agent will actually call Alibaba APIs (the script alone does not).
2) If you must provide credentials, create a minimal-scope API key limited to the necessary AnimateAnyone actions and region (Beijing) rather than reusing broad account credentials.
3) Avoid putting sensitive keys into shared global credential files unless you understand their scope; prefer dedicated environment variables where possible.
4) Ensure input files truly need to be public URLs (the skill requires this) and that exposing those URLs is acceptable.
If the author cannot justify the missing manifest fields, treat the skill as untrusted until fixed.
License
MIT-0
Free to use, modify, and redistribute. No attribution required.
SKILL.md
Category: provider
Model Studio AnimateAnyone
Validation
mkdir -p output/aliyun-animate-anyone
python -m py_compile skills/ai/video/aliyun-animate-anyone/scripts/prepare_animate_anyone_request.py && echo "py_compile_ok" > output/aliyun-animate-anyone/validate.txt
Pass criteria: command exits 0 and output/aliyun-animate-anyone/validate.txt is generated.
Output And Evidence
- Save normalized request payloads, detection outputs, template IDs, and task polling snapshots under output/aliyun-animate-anyone/.
- Record whether the result should keep the reference image background or the source video background.
Use AnimateAnyone when the task needs motion transfer from a template video rather than plain talking-head animation.
Critical model names
Use these exact model strings:
- animate-anyone-detect-gen2
- animate-anyone-template-gen2
- animate-anyone-gen2
Selection guidance:
- Run image detection first.
- Run template generation on the source motion video.
- Use animate-anyone-gen2 for the final video job.
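The three-step flow above can be sketched as plain request payloads. This is a minimal illustration, not the skill's actual helper script: the function names are hypothetical, and the HTTP client that would submit these payloads to DashScope is deliberately out of scope.

```python
# Hypothetical sketch of the three AnimateAnyone request payloads.
# Field names follow this skill's normalized interface; the model
# strings are the exact values listed above.

def build_detect_request(image_url: str) -> dict:
    """Step 1: character detection on the reference image."""
    return {"model": "animate-anyone-detect-gen2", "image_url": image_url}

def build_template_request(video_url: str) -> dict:
    """Step 2: template generation from the source motion video."""
    return {"model": "animate-anyone-template-gen2", "video_url": video_url}

def build_generate_request(image_url: str, template_id: str,
                           use_ref_img_bg: bool = False) -> dict:
    """Step 3: final video job combining image and template."""
    return {
        "model": "animate-anyone-gen2",
        "image_url": image_url,
        "template_id": template_id,
        "use_ref_img_bg": use_ref_img_bg,
    }

req = build_generate_request("https://example.com/dancer.png", "tmpl_xxx",
                             use_ref_img_bg=True)
```

Keeping the three builders separate mirrors the detect-then-template-then-generate ordering the selection guidance requires.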
Prerequisites
- China mainland (Beijing) only.
- Set DASHSCOPE_API_KEY in your environment, or add dashscope_api_key to ~/.alibabacloud/credentials.
- Input files must be public HTTP/HTTPS URLs.
Normalized interface (video.animate_anyone)
Detect Request
- model (string, optional): default animate-anyone-detect-gen2
- image_url (string, required)
Template Request
- model (string, optional): default animate-anyone-template-gen2
- video_url (string, required)
Generate Request
- model (string, optional): default animate-anyone-gen2
- image_url (string, required)
- template_id (string, required)
- use_ref_img_bg (bool, optional): whether to keep the input image background
Response
- task_id (string)
- task_status (string)
- video_url (string, when finished)
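The response shape implies an asynchronous job that must be polled until task_status indicates completion. A generic polling loop over that shape could look like this; the status strings and the `fetch_task` callback are assumptions, not part of this skill:

```python
# Illustrative polling loop over the normalized response shape
# (task_id / task_status / video_url). fetch_task is a stand-in
# for whatever client call retrieves task state; the SUCCEEDED /
# FAILED status strings are assumptions to verify against the API.
import time

def wait_for_video(fetch_task, task_id: str,
                   interval: float = 5.0, timeout: float = 600.0) -> str:
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        resp = fetch_task(task_id)
        status = resp["task_status"]
        if status == "SUCCEEDED":
            return resp["video_url"]
        if status == "FAILED":
            raise RuntimeError(f"task {task_id} failed")
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} not finished within {timeout}s")
```

Snapshots of each polled response are what the Output And Evidence section asks you to save under output/aliyun-animate-anyone/.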
Quick start
python skills/ai/video/aliyun-animate-anyone/scripts/prepare_animate_anyone_request.py \
--image-url "https://example.com/dancer.png" \
--template-id "tmpl_xxx" \
--use-ref-img-bg
Operational guidance
- The action template must come from the official template-generation API.
- Full-body images work best when use_ref_img_bg=false; half-body images are not recommended in that mode.
- This skill is best for dancing or large body-motion transfer, not generic talking-head tasks.
Output location
- Default output: output/aliyun-animate-anyone/request.json
- Override the base dir with OUTPUT_DIR.
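The output-path rule can be sketched as below. Note one assumption: this reads OUTPUT_DIR as replacing only the output base directory, with the aliyun-animate-anyone subdirectory kept; confirm that reading against the helper script.

```python
# Minimal sketch of the output-path rule: default base dir is
# "output", overridable via the OUTPUT_DIR environment variable.
import os
from pathlib import Path

def request_path() -> Path:
    base = Path(os.environ.get("OUTPUT_DIR", "output"))
    return base / "aliyun-animate-anyone" / "request.json"
```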
References
references/sources.md
Files
4 total