Aliyun Qwen Multimodal Embedding
v1.0.0

Use when multimodal embeddings are needed from Alibaba Cloud Model Studio models such as `qwen3-vl-embedding` for image, video, and text retrieval, cross-mod...
MIT-0
Security Scan (OpenClaw): Suspicious, high confidence

Purpose & Capability
Name/description claim: generate multimodal embedding requests for Alibaba Cloud Model Studio. The included Python script exactly matches that purpose (it builds/writes a request JSON and does not call any network services). However SKILL.md's 'Prerequisites' asks the user to set DASHSCOPE_API_KEY or add credentials to ~/.alibabacloud/credentials and to 'pair this skill with a vector store' — none of which are used by the script. This mismatch looks like copy-paste or over-broad documentation and should be explained by the author.
Instruction Scope
Runtime instructions contain references to environment credentials (DASHSCOPE_API_KEY and ~/.alibabacloud/credentials) and advice to stage files in object storage, but the runtime artifact (scripts/prepare_multimodal_embedding_request.py) only composes JSON and writes to disk. There are no commands that read credentials, call network endpoints, or transmit data. The documentation thus grants broader scope than the code actually performs.
Install Mechanism
This is an instruction-only skill with one small Python helper script and no install spec or remote downloads. No packages are fetched and nothing is written to system-wide locations during install — low install risk.
Credentials
No required env vars or primary credential are declared in registry metadata, but SKILL.md requests DASHSCOPE_API_KEY or an entry in ~/.alibabacloud/credentials. Because the code does not use these, the request for credentials is disproportionate and unexplained. If the skill will later be extended to call cloud APIs, requiring credentials would make sense — but as-is, asking for them is unnecessary and raises the risk of accidental credential exposure.
Persistence & Privilege
The skill does not request always: true and has no install actions that modify other skills or system config. It has normal, limited presence (a single helper script) and no special privileges.
What to consider before installing
The code only prepares and writes a JSON request for Alibaba Cloud multimodal embeddings and does not call any network services. However, the documentation asks you to set DASHSCOPE_API_KEY or add credentials to ~/.alibabacloud/credentials and to pair the skill with a vector store; neither is used by the included script. Before installing or providing credentials:
1. Ask the publisher why an API key is mentioned and whether the skill will ever make requests on your behalf.
2. If you don't need networked calls, do not supply credentials; keep testing in a sandbox.
3. If the skill will later be extended to call cloud services, provide a least-privilege key scoped only to the needed API and store it in a secure secret store.
4. Run the included validation (`python -m py_compile ...`) and inspect any changes the skill makes locally.
The inconsistency is likely benign copy-paste, but clarify with the author before supplying secrets or chaining this skill into an automated pipeline.
License
MIT-0
Free to use, modify, and redistribute. No attribution required.
SKILL.md
Category: provider
Model Studio Multimodal Embedding
Validation
mkdir -p output/aliyun-qwen-multimodal-embedding
python -m py_compile skills/ai/search/aliyun-qwen-multimodal-embedding/scripts/prepare_multimodal_embedding_request.py && echo "py_compile_ok" > output/aliyun-qwen-multimodal-embedding/validate.txt
Pass criteria: command exits 0 and output/aliyun-qwen-multimodal-embedding/validate.txt is generated.
Output And Evidence
- Save normalized request payloads, selected dimensions, and sample input references under `output/aliyun-qwen-multimodal-embedding/`.
- Record the exact model, modality mix, and output vector dimension for reproducibility.
Use this skill when the task needs text, image, or video embeddings from Model Studio for retrieval or similarity workflows.
Critical model names
Use one of these exact model strings as needed:
- `qwen3-vl-embedding`
- `qwen2.5-vl-embedding`
- `tongyi-embedding-vision-plus-2026-03-06`
Selection guidance:
- Prefer `qwen3-vl-embedding` for the newest multimodal embedding path.
- Use `qwen2.5-vl-embedding` when you need compatibility with an older deployed pipeline.
Prerequisites
- Set `DASHSCOPE_API_KEY` in your environment, or add `dashscope_api_key` to `~/.alibabacloud/credentials`.
- Pair this skill with a vector store such as DashVector, OpenSearch, or Milvus when building retrieval systems.
Normalized interface (embedding.multimodal)
Request
- `model` (string, optional): default `qwen3-vl-embedding`
- `texts` (array&lt;string&gt;, optional)
- `images` (array&lt;string&gt;, optional): public URLs or local paths uploaded by your client layer
- `videos` (array&lt;string&gt;, optional): public URLs where supported
- `dimension` (int, optional): e.g. `2560`, `2048`, `1536`, `1024`, `768`, `512`, `256` for `qwen3-vl-embedding`
Response
- `embeddings` (array&lt;object&gt;)
- `dimension` (int)
- `usage` (object, optional)
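The normalized interface above can be sketched as a small payload builder. This is a hypothetical illustration of the documented request shape (`model`, `input.contents`, `parameters.dimension`), not the actual internals of `prepare_multimodal_embedding_request.py`; the `build_request` helper name is invented for this example.

```python
import json

def build_request(texts=None, images=None, videos=None,
                  model="qwen3-vl-embedding", dimension=None):
    """Assemble an embedding.multimodal request payload (illustrative shape)."""
    contents = []
    for text in texts or []:
        contents.append({"text": text})
    for image in images or []:
        contents.append({"image": image})
    for video in videos or []:
        contents.append({"video": video})
    # input.contents must stay an array, per the operational guidance below
    payload = {"model": model, "input": {"contents": contents}}
    if dimension is not None:
        payload["parameters"] = {"dimension": dimension}
    return payload

request = build_request(
    texts=["A cat sitting on a red chair"],
    images=["https://example.com/cat.jpg"],
    dimension=1024,
)
print(json.dumps(request, indent=2))
```

One text plus one image yields two entries in `input.contents`; mixing modalities in a single request is what distinguishes this interface from a plain text-embedding call.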
Quick start
python skills/ai/search/aliyun-qwen-multimodal-embedding/scripts/prepare_multimodal_embedding_request.py \
--text "A cat sitting on a red chair" \
--image "https://example.com/cat.jpg" \
--dimension 1024
Operational guidance
- Keep `input.contents` as an array; malformed shapes are a common cause of 400 errors.
- Pin the output dimension to match your index schema before writing vectors.
- Use the same model and dimension across one vector index to avoid mixed-vector incompatibility.
- For large image or video batches, stage files in object storage and reference stable URLs.
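The first two guidance points can be enforced with a pre-flight check before any request leaves your pipeline. This is a hedged sketch, assuming the request shape documented above; `validate_request` and `INDEX_DIMENSION` are hypothetical names, and your index dimension will differ.

```python
INDEX_DIMENSION = 1024  # assumed dimension of your vector index schema

def validate_request(payload, index_dimension=INDEX_DIMENSION):
    """Reject malformed payloads before they cause a 400 or mixed-vector index."""
    contents = payload.get("input", {}).get("contents")
    # input.contents must be a non-empty array, not a bare string or dict
    if not isinstance(contents, list) or not contents:
        raise ValueError("input.contents must be a non-empty array")
    dim = payload.get("parameters", {}).get("dimension")
    # a mismatched dimension would silently corrupt a pinned index schema
    if dim is not None and dim != index_dimension:
        raise ValueError(
            f"dimension {dim} does not match index schema {index_dimension}"
        )
    return True

payload = {
    "model": "qwen3-vl-embedding",
    "input": {"contents": [{"text": "A cat sitting on a red chair"}]},
    "parameters": {"dimension": 1024},
}
assert validate_request(payload)
```

Running such a check in the same place that writes vectors keeps one model and one dimension per index, as the guidance recommends.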
Output location
- Default output: `output/aliyun-qwen-multimodal-embedding/request.json`
- Override the base dir with `OUTPUT_DIR`.
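The output convention can be exercised end to end as follows. This is a sketch under two assumptions: that `OUTPUT_DIR` replaces the `output` base directory (the docs don't spell out which path segment it overrides), and that the placeholder payload stands in for whatever the script actually writes.

```python
import json
import os

# Resolve the output path per the documented convention (assumed semantics)
base = os.environ.get("OUTPUT_DIR", "output")
out_dir = os.path.join(base, "aliyun-qwen-multimodal-embedding")
os.makedirs(out_dir, exist_ok=True)
out_path = os.path.join(out_dir, "request.json")

# Placeholder payload; the helper script's real output may differ
payload = {"model": "qwen3-vl-embedding",
           "input": {"contents": [{"text": "example"}]}}
with open(out_path, "w") as fh:
    json.dump(payload, fh, indent=2)

# Read it back to verify the write round-trips
with open(out_path) as fh:
    loaded = json.load(fh)
```

Reading the file back before wiring it into a downstream pipeline is a cheap way to confirm the request landed where your tooling expects it.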
References
references/sources.md
Files
4 total
