Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Aliyun Emoji

v1.0.0

Use when generating template-driven emoji videos with Alibaba Cloud Model Studio Emoji (`emoji-v1`) from a detected portrait image. Use when producing fixed-...

Security Scan

VirusTotal: Benign
OpenClaw: Suspicious (high confidence)
Purpose & Capability
Name, description, and included script align with generating emoji videos via Alibaba Cloud Model Studio (detect, then generate). However, the SKILL.md asks for an environment API key (DASHSCOPE_API_KEY) or a credentials entry in ~/.alibabacloud/credentials even though the registry metadata lists no required env vars or primary credential; this is an inconsistency.
Instruction Scope
The instructions ask the agent/operator to set DASHSCOPE_API_KEY or add dashscope_api_key to ~/.alibabacloud/credentials. The skill manifest did not declare this env var; the instructions also require saving request payloads and detection outputs to disk. While saving outputs and creating a normalized request is reasonable, referencing an undeclared credential and a specific credentials file path is out-of-band relative to the skill metadata and should be clarified.
Install Mechanism
No install spec; this is an instruction-only skill with a small helper script that only writes a JSON payload. Nothing is downloaded or executed beyond local Python usage; low install risk.
Credentials
The SKILL.md requires DASHSCOPE_API_KEY or a dashscope_api_key entry in ~/.alibabacloud/credentials but the skill's declared required env vars and primary credential are empty. Requesting a secret without declaring it in metadata is disproportionate and makes it unclear what exact credential is needed and why.
Persistence & Privilege
The skill does not request always-on presence and does not modify other skills or system-wide settings. It only writes outputs to an output/aliyun-emoji directory as instructed; no elevated persistence privileges are requested.
What to consider before installing
This skill's code is simple and only prepares JSON requests, but the SKILL.md asks you to provide a DASHSCOPE_API_KEY or add dashscope_api_key to ~/.alibabacloud/credentials while the registry metadata lists no required env vars; that mismatch is the main concern. Before installing:

  1. Confirm the skill's source/origin and trustworthiness (the homepage is missing).
  2. Ask the publisher what DASHSCOPE_API_KEY refers to and whether standard Alibaba Cloud credentials (AccessKey ID/Secret) are required instead.
  3. Prefer minimal-scoped test credentials or a disposable key over your primary cloud account secrets.
  4. Run the provided validation locally (python -m py_compile ...) and inspect the generated request.json to ensure only expected data (image URL, bboxes, template ID) would be sent.
  5. If you will provide credentials, check how the agent runtime will use them and whether network calls go only to official Alibaba endpoints.

If the publisher cannot clearly justify the DASHSCOPE_API_KEY and its absence from the metadata, treat the skill as untrusted.


Version: latest (vk974q3fmgvvrc7kh5fd2yrewph841ytf)

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

SKILL.md

Category: provider

Model Studio Emoji

Validation

mkdir -p output/aliyun-emoji
python -m py_compile skills/ai/video/aliyun-emoji/scripts/prepare_emoji_request.py && echo "py_compile_ok" > output/aliyun-emoji/validate.txt

Pass criteria: command exits 0 and output/aliyun-emoji/validate.txt is generated.

Output And Evidence

  • Save normalized request payloads, detected face boxes, selected template ID, and task polling snapshots under output/aliyun-emoji/.
  • Record the exact portrait URL and whether detection passed.
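
The evidence-saving steps above can be sketched as follows. This is a minimal sketch, not the skill's own code: save_evidence and the evidence.json filename are hypothetical; only the output/aliyun-emoji/ location and the recorded fields come from the instructions.

```python
import json
from pathlib import Path

def save_evidence(out_dir, image_url, detection_passed, face_bbox, template_id):
    """Persist detection evidence as JSON under the skill's output directory."""
    base = Path(out_dir)
    base.mkdir(parents=True, exist_ok=True)
    evidence = {
        "image_url": image_url,            # exact portrait URL, as required
        "detection_passed": detection_passed,
        "face_bbox": face_bbox,
        "template_id": template_id,
    }
    path = base / "evidence.json"
    path.write_text(json.dumps(evidence, indent=2))
    return path
```

A similar helper could record task polling snapshots by appending timestamped entries instead of overwriting one file.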

Use Emoji when the user wants a fixed-template facial animation clip rather than open-ended video generation.

Critical model names

Use these exact model strings:

  • emoji-detect-v1
  • emoji-v1

Selection guidance:

  • Run emoji-detect-v1 first to obtain face_bbox and ext_bbox_face.
  • Use emoji-v1 only after detection succeeds.
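
The detect-then-generate ordering can be sketched as below. run_detect and run_generate are hypothetical stand-ins for the actual Model Studio calls; only the model strings and the bbox field names come from this skill.

```python
def animate_portrait(image_url, template_id, run_detect, run_generate):
    """Run face detection first; call the generate model only on success."""
    detection = run_detect(model="emoji-detect-v1", image_url=image_url)
    face_bbox = detection.get("face_bbox")
    ext_bbox_face = detection.get("ext_bbox_face")
    if not face_bbox or not ext_bbox_face:
        raise ValueError("detection did not return usable bounding boxes")
    return run_generate(
        model="emoji-v1",
        image_url=image_url,
        face_bbox=face_bbox,
        ext_bbox_face=ext_bbox_face,
        template_id=template_id,
    )
```

Passing the two calls in as functions keeps the ordering rule testable without any network access.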

Prerequisites

  • China mainland (Beijing) only.
  • Set DASHSCOPE_API_KEY in your environment, or add dashscope_api_key to ~/.alibabacloud/credentials.
  • Input image must be a public HTTP/HTTPS URL.

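The credential lookup order described above can be sketched as follows, assuming the environment variable takes precedence. The [default]-style INI layout of ~/.alibabacloud/credentials is an assumption; the skill does not specify the file's format.

```python
import configparser
import os
from pathlib import Path

def resolve_api_key(cred_path="~/.alibabacloud/credentials"):
    """Return DASHSCOPE_API_KEY from the env, else a dashscope_api_key
    entry from the credentials file, else None."""
    key = os.environ.get("DASHSCOPE_API_KEY")
    if key:
        return key
    path = Path(cred_path).expanduser()
    if path.exists():
        cfg = configparser.ConfigParser()  # assumes an INI-style file
        cfg.read(path)
        for section in cfg.sections():
            if "dashscope_api_key" in cfg[section]:
                return cfg[section]["dashscope_api_key"]
    return None
```

Given the metadata mismatch noted in the scan report, prefer a disposable key here over primary account secrets.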
Normalized interface (video.emoji)

Detect Request

  • model (string, optional): default emoji-detect-v1
  • image_url (string, required)

Generate Request

  • model (string, optional): default emoji-v1
  • image_url (string, required)
  • face_bbox (array<int>, required)
  • ext_bbox_face (array<int>, required)
  • template_id (string, required)
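
A normalized generate request with the fields above can be validated like this. The helper is a sketch, not the skill's prepare_emoji_request.py; the [x1, y1, x2, y2] bbox shape is inferred from the quick-start example below.

```python
REQUIRED = ("image_url", "face_bbox", "ext_bbox_face", "template_id")

def build_generate_request(**fields):
    """Build a video.emoji generate payload, applying the documented default."""
    payload = {"model": fields.pop("model", "emoji-v1")}
    missing = [name for name in REQUIRED if name not in fields]
    if missing:
        raise ValueError(f"missing required fields: {missing}")
    for name in ("face_bbox", "ext_bbox_face"):
        if len(fields[name]) != 4:
            raise ValueError(f"{name} must be [x1, y1, x2, y2]")
    payload.update(fields)
    return payload
```

Failing fast on missing fields keeps malformed requests out of request.json before anything is sent.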

Response

  • task_id (string)
  • task_status (string)
  • video_url (string, when finished)
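
Polling the response shape above might look like the sketch below. fetch_status is a hypothetical stand-in for the runtime's task-query call, and the SUCCEEDED/FAILED strings are the usual DashScope async statuses, assumed here rather than taken from this skill.

```python
import time

def poll_task(task_id, fetch_status, interval=5.0, max_attempts=60):
    """Poll until the task finishes; return video_url on success."""
    for _ in range(max_attempts):
        resp = fetch_status(task_id)
        status = resp.get("task_status")
        if status == "SUCCEEDED":
            return resp.get("video_url")
        if status in ("FAILED", "CANCELED"):
            raise RuntimeError(f"task {task_id} ended with status {status}")
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} still pending after {max_attempts} polls")
```

Each response could also be written to output/aliyun-emoji/ to satisfy the polling-snapshot requirement above.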

Quick start

python skills/ai/video/aliyun-emoji/scripts/prepare_emoji_request.py \
  --image-url "https://example.com/portrait.png" \
  --face-bbox 302,286,610,593 \
  --ext-bbox-face 71,9,840,778 \
  --template-id emoji_001

Operational guidance

  • Use a single-person, front-facing portrait with no face occlusion.
  • Template IDs come from the official template list or console experience; do not invent them in production calls.
  • Emoji output is a person video clip, not a sticker pack or text overlay asset.

Output location

  • Default output: output/aliyun-emoji/request.json
  • Override base dir with OUTPUT_DIR.
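
The OUTPUT_DIR override can be resolved as sketched below. Whether OUTPUT_DIR replaces the whole path or only the base directory is not stated; this sketch assumes it replaces the base, keeping the aliyun-emoji/request.json suffix.

```python
import os
from pathlib import Path

def request_json_path():
    """Resolve the request.json path, honoring the OUTPUT_DIR override."""
    base = Path(os.environ.get("OUTPUT_DIR", "output")) / "aliyun-emoji"
    return base / "request.json"
```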

References

  • references/sources.md

Files

4 total
