Aliyun LivePortrait
v1.0.0
Use when generating lightweight talking-head portrait videos with Alibaba Cloud Model Studio LivePortrait (`liveportrait`) from a detected portrait image and...
MIT-0
Security Scan
OpenClaw
Suspicious
high confidence
Purpose & Capability
The name, description, and included script (which builds JSON requests for liveportrait-detect and liveportrait) are consistent with being an Alibaba Cloud LivePortrait helper. However, the SKILL.md requires a DASHSCOPE_API_KEY (or a dashscope_api_key in ~/.alibabacloud/credentials) even though the skill's manifest lists no required environment variables or primary credential. Expectation: a provider integration would normally declare required credentials; their omission here is an inconsistency.
Instruction Scope
SKILL.md's runtime validation runs python -m py_compile on skills/ai/video/aliyun-liveportrait/scripts/prepare_liveportrait_request.py, but the actual script in the bundle is at scripts/prepare_liveportrait_request.py (different path). That means the provided validation command will fail unless files are moved. The instructions also direct saving normalized payloads and the exact portrait/audio URLs to output/aliyun-liveportrait/ — this is appropriate for reproducibility but is a privacy/data-retention consideration (storing source URLs and request payloads locally). Otherwise the instructions stay within the stated purpose and do not instruct arbitrary file reads or hidden exfiltration.
Install Mechanism
There is no install spec; the skill is instruction-only plus a small helper script. No network downloads or package installs are requested by the bundle itself.
Credentials
The SKILL.md explicitly asks users to set DASHSCOPE_API_KEY or add dashscope_api_key to ~/.alibabacloud/credentials, but the skill metadata declares no required environment variables or primary credential. Additionally, calling Alibaba Cloud APIs normally requires cloud credentials (e.g., AccessKey) — their absence in the manifest is an omission. This mismatch could lead to confusion and accidental use of inappropriate credentials. The number of credentials requested is small, but they are not declared where users expect them.
Persistence & Privilege
The skill is not always:true and does not request persistent privileges. It writes request.json and validation artifacts under output/aliyun-liveportrait/ (user-writable), which is normal for a helper script. It does not modify other skills or system settings.
What to consider before installing
This skill appears to be a thin helper for preparing LivePortrait API requests and is not obviously malicious, but there are several issues you should consider before installing or running it:
- Credentials mismatch: SKILL.md tells you to set DASHSCOPE_API_KEY or add dashscope_api_key to ~/.alibabacloud/credentials, yet the skill metadata declares no required env vars. Treat any credential you provide as sensitive; prefer least-privilege keys and do not use long-lived master credentials.
- Validation path bug: The validation command in SKILL.md references skills/ai/video/aliyun-liveportrait/scripts/prepare_liveportrait_request.py, but the included script lives at scripts/prepare_liveportrait_request.py. Either the SKILL.md is out of date or files are misplaced; verify and fix paths before running automated validation.
- Data retention/privacy: The skill instructs you to save exact portrait/audio URLs and request payloads to output/aliyun-liveportrait/. These may contain PII or links to private content. If you will be working with sensitive images/audio, review and control where output is saved and who can access it.
- Regional constraint: The README notes China (Beijing) only — confirm this matches your deployment needs and legal/regulatory requirements.
- Operational safety: The included script only writes a JSON request file — it does not perform network calls itself. However, before invoking any network step that actually uploads images or submits jobs to Alibaba Cloud, review the exact API calls and ensure the credentials used are appropriate.
If you want to proceed: correct the script path in SKILL.md or move the script to the expected location, declare the required environment variables in the skill metadata, and use limited-scope credentials. If you are unsure about providing credentials, do not install or run the skill until those metadata issues are resolved.
License
MIT-0
Free to use, modify, and redistribute. No attribution required.
SKILL.md
Category: provider
Model Studio LivePortrait
Validation
mkdir -p output/aliyun-liveportrait
python -m py_compile skills/ai/video/aliyun-liveportrait/scripts/prepare_liveportrait_request.py && echo "py_compile_ok" > output/aliyun-liveportrait/validate.txt
Pass criteria: command exits 0 and output/aliyun-liveportrait/validate.txt is generated.
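The validation step can be expressed in Python directly via the standard-library `py_compile` module. This sketch also works around the path inconsistency flagged in the scan by trying both candidate locations; which path actually exists in your checkout is an assumption you should verify:

```python
import os
import py_compile

# The SKILL.md path and the bundle-relative path differ, so try both.
# Assumption: the helper script exists at one of these two locations.
CANDIDATES = [
    "skills/ai/video/aliyun-liveportrait/scripts/prepare_liveportrait_request.py",
    "scripts/prepare_liveportrait_request.py",
]

def validate(out_dir="output/aliyun-liveportrait"):
    """Compile-check the helper script and write the validation marker."""
    os.makedirs(out_dir, exist_ok=True)
    script = next((p for p in CANDIDATES if os.path.exists(p)), None)
    if script is None:
        raise FileNotFoundError(
            "prepare_liveportrait_request.py not found at either candidate path"
        )
    py_compile.compile(script, doraise=True)  # raises PyCompileError on syntax errors
    with open(os.path.join(out_dir, "validate.txt"), "w") as f:
        f.write("py_compile_ok\n")
```

This mirrors the pass criteria above: a successful run leaves `validate.txt` in the output directory, and a syntax error or missing script raises instead of exiting 0.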
Output And Evidence
- Save normalized request payloads, template choice, and task polling snapshots under `output/aliyun-liveportrait/`.
- Record the exact portrait/audio URLs and motion-strength related parameters.
Use LivePortrait when the job is lightweight portrait animation with speech audio, especially for longer clips or simpler presenter-style motion.
Critical model names
Use these exact model strings:
- `liveportrait-detect`
- `liveportrait`
Selection guidance:
- Run `liveportrait-detect` first to verify the portrait image.
- Use `liveportrait` for the actual video generation task.
Prerequisites
- China mainland (Beijing) only.
- Set `DASHSCOPE_API_KEY` in your environment, or add `dashscope_api_key` to `~/.alibabacloud/credentials`.
- Input image and audio must be public HTTP/HTTPS URLs.
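The credential resolution order described above can be sketched as follows. Note the INI-style parsing of `~/.alibabacloud/credentials` is an assumption; check the actual format of your credentials file before relying on it:

```python
import configparser
import os

def resolve_api_key():
    """Return the DashScope API key from the environment or the credentials file.

    Resolution order: DASHSCOPE_API_KEY env var first, then a
    dashscope_api_key entry in ~/.alibabacloud/credentials.
    Assumption: the credentials file is INI-style with sections.
    """
    key = os.environ.get("DASHSCOPE_API_KEY")
    if key:
        return key
    path = os.path.expanduser("~/.alibabacloud/credentials")
    if os.path.exists(path):
        parser = configparser.ConfigParser()
        parser.read(path)
        for section in parser.sections():
            if parser.has_option(section, "dashscope_api_key"):
                return parser.get(section, "dashscope_api_key")
    raise RuntimeError("No DashScope API key found; set DASHSCOPE_API_KEY")
```

As the scan notes, prefer a least-privilege key here rather than long-lived master credentials.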
Normalized interface (video.liveportrait)
Detect Request
- `model` (string, optional): default `liveportrait-detect`
- `image_url` (string, required)
Generate Request
- `model` (string, optional): default `liveportrait`
- `image_url` (string, required)
- `audio_url` (string, required)
- `template_id` (string, optional): `normal`, `calm`, or `active`
- `eye_move_freq` (number, optional): `0` to `1`
- `video_fps` (int, optional): `15` to `30`
- `mouth_move_strength` (number, optional): `0` to `1.5`
- `paste_back` (bool, optional)
- `head_move_strength` (number, optional): `0` to `1`
Response
- `task_id` (string)
- `task_status` (string)
- `video_url` (string, when finished)
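The normalized generate request above can be sketched as a small payload builder. The field names, defaults, and ranges come from the listing; the function name and the strict range checks are illustrative assumptions (the real helper script may normalize differently):

```python
def build_generate_request(image_url, audio_url, model="liveportrait",
                           template_id=None, eye_move_freq=None, video_fps=None,
                           mouth_move_strength=None, paste_back=None,
                           head_move_strength=None):
    """Assemble a generate-request dict, enforcing the documented ranges."""
    if template_id is not None and template_id not in ("normal", "calm", "active"):
        raise ValueError("template_id must be normal, calm, or active")
    if eye_move_freq is not None and not 0 <= eye_move_freq <= 1:
        raise ValueError("eye_move_freq must be in [0, 1]")
    if video_fps is not None and not 15 <= video_fps <= 30:
        raise ValueError("video_fps must be in [15, 30]")
    if mouth_move_strength is not None and not 0 <= mouth_move_strength <= 1.5:
        raise ValueError("mouth_move_strength must be in [0, 1.5]")
    if head_move_strength is not None and not 0 <= head_move_strength <= 1:
        raise ValueError("head_move_strength must be in [0, 1]")
    payload = {"model": model, "image_url": image_url, "audio_url": audio_url}
    optional = {"template_id": template_id, "eye_move_freq": eye_move_freq,
                "video_fps": video_fps, "mouth_move_strength": mouth_move_strength,
                "paste_back": paste_back, "head_move_strength": head_move_strength}
    # Only include optional fields the caller actually set.
    payload.update({k: v for k, v in optional.items() if v is not None})
    return payload
```

For example, `build_generate_request("https://example.com/portrait.png", "https://example.com/speech.mp3", template_id="calm", video_fps=24, paste_back=True)` produces a dict matching the quick-start invocation below.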
Quick start
python skills/ai/video/aliyun-liveportrait/scripts/prepare_liveportrait_request.py \
--image-url "https://example.com/portrait.png" \
--audio-url "https://example.com/speech.mp3" \
--template-id calm \
--video-fps 24 \
--paste-back
Operational guidance
- Use a clear, front-facing portrait with low occlusion.
- Keep the audio clean and voice-dominant.
- `paste_back=false` outputs only the generated face region; keep it `true` for standard talking-head output.
- LivePortrait is a better fit than EMO when you need longer, simpler presenter-style clips.
Output location
- Default output: `output/aliyun-liveportrait/request.json`
- Override the base dir with `OUTPUT_DIR`.
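The output-location convention above can be sketched as follows. The assumption here is that `OUTPUT_DIR` replaces the base directory wholesale; verify against the helper script's actual behavior:

```python
import json
import os

def write_request(payload, filename="request.json"):
    """Write the prepared request payload to the configured output directory."""
    # Base directory defaults to output/aliyun-liveportrait;
    # the OUTPUT_DIR environment variable overrides it (assumed semantics).
    base = os.environ.get("OUTPUT_DIR", "output/aliyun-liveportrait")
    os.makedirs(base, exist_ok=True)
    path = os.path.join(base, filename)
    with open(path, "w", encoding="utf-8") as f:
        json.dump(payload, f, indent=2)
    return path
```

Remember the data-retention caveat from the scan: the written `request.json` contains the exact portrait/audio URLs, so control who can read this directory.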
References
references/sources.md
Files
4 total
