GLM-V-Caption

v1.0.3

Generate captions (descriptions) for images, videos, and documents using the ZhiPu GLM-V multimodal model series. Use this skill whenever the user wants to describe, caption, summarize, or interpret image/video/document content.

by Jared Wen (@jaredforreal) · MIT-0

Install

openclaw skills install glmv-caption

GLM-V Caption Skill

Generate captions for images, videos, and documents using the ZhiPu GLM-V multimodal model.

When to Use

  • Describe, caption, summarize, or interpret image/video/document content
  • User mentions "describe this image", "caption", "summarize this video", "图片描述", "视频摘要", "文档解读", "看图说话"
  • Extract visual or textual information from media files
  • Compare multiple images
  • User provides an image/video/file and asks what's in it

Supported Input Types

| Type  | Formats                           | Max Size          | Max Count | Base64 |
|-------|-----------------------------------|-------------------|-----------|--------|
| Image | jpg, jpeg, png                    | 5MB / 6000×6000px | 50        | ✓      |
| Video | mp4, mkv, mov                     | 200MB             | —         | ✗      |
| File  | pdf, docx, txt, xlsx, pptx, jsonl | —                 | 50        | ✗      |

⚠️ file_url cannot be mixed with image_url or video_url in the same request. ⚠️ Videos and files only support URLs — local paths and base64 are supported for images only.
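Under the hood, a request like this is presumably assembled as an OpenAI-style content-part array, with one part per media item plus a text part for the prompt. A minimal sketch, assuming ZhiPu's OpenAI-compatible schema (the `image_url` part type and `build_messages` helper name are illustrative, not taken from the actual script):

```python
def build_messages(prompt: str, image_urls: list[str]) -> list[dict]:
    """Assemble a single user message whose content mixes image parts and a text prompt.

    Hypothetical sketch: field names follow the OpenAI-compatible
    content-part convention ({"type": "image_url", ...}); videos and files
    would use their own part types and, per the rule above, must not be
    mixed with image parts in one request.
    """
    parts = [{"type": "image_url", "image_url": {"url": u}} for u in image_urls]
    parts.append({"type": "text", "text": prompt})
    return [{"role": "user", "content": parts}]
```

The same shape extends to multiple images: each URL becomes one content part, which is why a per-request image count limit applies.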

Resource Links

| Resource    | Link                                             |
|-------------|--------------------------------------------------|
| Get API Key | https://bigmodel.cn/usercenter/proj-mgmt/apikeys |
| API Docs    | Chat Completions                                 |

Prerequisites

API Key Setup (Required)

This script reads the key from the ZHIPU_API_KEY environment variable and shares it with other Zhipu skills.

Get a key: visit the Zhipu Open Platform API Keys page to create or copy your key.

Setup options (choose one):

  1. OpenClaw config (recommended): Set in openclaw.json under skills.entries.glmv-caption.env:

    "glmv-caption": { "enabled": true, "env": { "ZHIPU_API_KEY": "your-api-key" } }

  2. Shell environment variable: Add to ~/.zshrc:

    export ZHIPU_API_KEY="your-api-key"

  3. .env file: Create .env in this skill directory:

    ZHIPU_API_KEY=your-api-key
    

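The three setup options above imply a lookup order: the process environment wins, with the skill-local .env file as a fallback. A minimal sketch of that resolution logic, assuming the .env file holds plain `NAME=value` lines (the `resolve_api_key` helper name is illustrative, not the actual script's API):

```python
import os
from pathlib import Path

def resolve_api_key(skill_dir: Path = Path(".")) -> str:
    """Return ZHIPU_API_KEY from the environment, falling back to a .env file
    in the skill directory. Exits with the skill's documented error message
    when no key is found."""
    key = os.environ.get("ZHIPU_API_KEY")
    if key:
        return key
    env_file = skill_dir / ".env"
    if env_file.exists():
        for line in env_file.read_text().splitlines():
            name, _, value = line.partition("=")
            if name.strip() == "ZHIPU_API_KEY":
                return value.strip().strip('"')
    raise SystemExit(
        "ZHIPU_API_KEY not configured. Get your API key at: "
        "https://bigmodel.cn/usercenter/proj-mgmt/apikeys"
    )
```

Because the key is read from the environment, the same variable set once in openclaw.json or ~/.zshrc serves every Zhipu skill.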
⛔ MANDATORY RESTRICTIONS - DO NOT VIOLATE ⛔

  1. ONLY use GLM-V API — Execute the script python scripts/glmv_caption.py
  2. NEVER caption media yourself — Do NOT try to describe content using built-in vision or any other method
  3. NEVER offer alternatives — Do NOT suggest "I can try to describe it" or similar
  4. IF the API fails — Display the error message and STOP immediately
  5. NO fallback methods — Do NOT attempt captioning any other way

📋 Output Display Rules (MANDATORY)

After running the script, you must show the full raw output to the user exactly as returned. Do not summarize, truncate, or only say "generated". Users need the original model output to evaluate quality.

  • Image captioning: show the full caption text
  • Multiple images: show each image result
  • Video/files: show the full understanding result
  • If token usage is included, you may optionally display it

How to Use

Caption an Image

python scripts/glmv_caption.py --images "https://example.com/photo.jpg"
python scripts/glmv_caption.py --images /path/to/photo.png

Caption Multiple Images

python scripts/glmv_caption.py --images img1.jpg img2.png "https://example.com/img3.jpg"

Caption a Video

python scripts/glmv_caption.py --videos "https://example.com/clip.mp4"

Caption a Document

python scripts/glmv_caption.py --files "https://example.com/report.pdf"
python scripts/glmv_caption.py --files "https://example.com/doc1.docx" "https://example.com/doc2.txt"

Custom Prompt

python scripts/glmv_caption.py --images photo.jpg --prompt "Describe the architecture style in detail"

Save Result

python scripts/glmv_caption.py --images photo.jpg --output result.json

Thinking Mode

python scripts/glmv_caption.py --images photo.jpg --thinking

CLI Reference

python {baseDir}/scripts/glmv_caption.py (--images IMG [IMG...] | --videos VID [VID...] | --files FILE [FILE...]) [OPTIONS]
| Parameter             | Required | Description |
|-----------------------|----------|-------------|
| `--images`, `-i`      | One of   | Image paths or URLs (supports multiple; base64 OK) |
| `--videos`, `-v`      | One of   | Video URLs (supports multiple; mp4/mkv/mov) |
| `--files`, `-f`       | One of   | Document URLs (supports multiple; pdf/docx/txt/xlsx/pptx/jsonl) |
| `--prompt`, `-p`      | No       | Custom prompt (default: "请详细描述这张图片的内容", i.e. "Please describe this image in detail") |
| `--model`, `-m`       | No       | Model name (default: glm-4.6v) |
| `--temperature`, `-t` | No       | Sampling temperature, 0-1 (default: 0.8) |
| `--top-p`             | No       | Nucleus sampling, 0.01-1.0 (default: 0.6) |
| `--max-tokens`        | No       | Max output tokens (default: 1024; max 32768) |
| `--thinking`          | No       | Enable thinking/reasoning mode |
| `--output`, `-o`      | No       | Save result JSON to file |
| `--pretty`            | No       | Pretty-print JSON output |
| `--stream`            | No       | Enable streaming output |

Note: --images, --videos, and --files are mutually exclusive per API limits.
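This exclusivity rule is straightforward to enforce at the CLI layer. A sketch of how the flags above could be declared with Python's standard argparse module (this is illustrative, not necessarily how scripts/glmv_caption.py is written):

```python
import argparse

# A mutually exclusive group mirrors the API limit: exactly one of
# --images / --videos / --files per invocation, each accepting multiple values.
parser = argparse.ArgumentParser(prog="glmv_caption.py")
group = parser.add_mutually_exclusive_group(required=True)
group.add_argument("--images", "-i", nargs="+", help="image paths or URLs")
group.add_argument("--videos", "-v", nargs="+", help="video URLs")
group.add_argument("--files", "-f", nargs="+", help="document URLs")

args = parser.parse_args(["--images", "a.jpg", "b.png"])
```

With this declaration, mixing flags (e.g. `--images a.jpg --videos v.mp4`) fails at parse time with a usage error, before any API call is made.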

Response Format

{
  "success": true,
  "caption": "A landscape photo showing a mountain range at sunset...",
  "usage": {
    "prompt_tokens": 128,
    "completion_tokens": 256,
    "total_tokens": 384
  }
}

Key fields:

  • success — whether the request succeeded
  • caption — the generated caption text
  • usage — token usage statistics
  • warning — present when content was blocked by safety review
  • error — error details on failure
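The display rules above (show the full caption, surface warnings, never swallow errors) map directly onto these fields. A minimal sketch of consuming the result JSON (the `summarize_result` helper is hypothetical, assuming the response shape documented above):

```python
import json

def summarize_result(raw: str) -> str:
    """Render the script's JSON result for display: full caption first,
    then any safety-review warning, then optional token usage; on failure,
    surface the error verbatim."""
    result = json.loads(raw)
    if not result.get("success"):
        return f"Error: {result.get('error')}"
    lines = [result["caption"]]
    if "warning" in result:
        lines.append(f"Warning: {result['warning']}")
    if "usage" in result:
        lines.append(f"Tokens: {result['usage']['total_tokens']}")
    return "\n".join(lines)
```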

Error Handling

API key not configured:

ZHIPU_API_KEY not configured. Get your API key at: https://bigmodel.cn/usercenter/proj-mgmt/apikeys

→ Show the exact error message to the user and guide them through configuration

Authentication failed (401/403): API key invalid/expired → reconfigure

Rate limit (429): Quota exhausted → inform user to wait

File not found: Local file missing → check path

Content filtered: warning field present → content blocked by safety review
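The status-to-guidance mapping above can be sketched as a small lookup table; unknown statuses fall back to the mandatory behavior of showing the raw error and stopping (the `explain` helper and messages are illustrative, not the script's actual output):

```python
# Hypothetical mapping from HTTP status to the user-facing guidance above.
GUIDANCE = {
    401: "Authentication failed: API key invalid or expired; reconfigure ZHIPU_API_KEY.",
    403: "Authentication failed: API key invalid or expired; reconfigure ZHIPU_API_KEY.",
    429: "Rate limit or quota exhausted; wait before retrying.",
}

def explain(status: int) -> str:
    """Return the guidance for a known status, or a generic stop-and-show message."""
    return GUIDANCE.get(status, f"API error (HTTP {status}): show the raw message and stop.")
```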

Version tags

latest: vk970pd9aw5mgmjs7bjh13xxzy984wvbm

Runtime requirements

Runtime: Clawdis
Env: ZHIPU_API_KEY