glm-understand-image
Pass. Audited by VirusTotal on May 12, 2026.
Overview
Type: OpenClaw Skill
Name: glm-understand-image
Version: 1.0.4

The skill is highly suspicious due to critical shell-injection vulnerabilities in `SKILL.md`. Its instructions have the AI agent embed user-provided input (API keys, image paths/URLs, prompts) directly into shell commands without any sanitization or quoting. This creates a severe remote code execution (RCE) risk: a malicious user could craft inputs containing shell metacharacters that execute arbitrary commands on the host system, particularly via the `cat > ~/.openclaw/config/glm.json`, `mcporter config add`, and `mcporter call` commands.
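The injection risk can be illustrated with a minimal sketch (the payload below is hypothetical, and `shlex.quote` is one standard mitigation, not something the skill itself uses): a value interpolated into a command string inside plain double quotes lets embedded metacharacters escape, while quoting the whole argument neutralizes them.

```python
import shlex

# Hypothetical user-supplied value containing shell metacharacters.
image_source = 'photo.png"; touch /tmp/pwned; echo "'

# Unsafe: naive interpolation lets the payload break out of the double quotes.
unsafe = f'mcporter call glm-vision.analyze_image image_source="{image_source}"'

# Safer: shlex.quote wraps the user-controlled argument so the shell
# treats it as a single literal token instead of parsing the semicolons.
safe = "mcporter call glm-vision.analyze_image " + shlex.quote(
    f"image_source={image_source}"
)

print(unsafe)
print(safe)
```

In the unsafe string the `"; touch /tmp/pwned; echo "` fragment would run as its own command if the line were passed to a shell; in the safe string it stays inside one single-quoted argument.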
Findings (3)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Your GLM API key could be revealed; anyone who obtains it may be able to use your GLM account or incur charges.
The command checks for a saved API key by printing the actual key value to stdout. That can expose a billing/account credential to the agent transcript, logs, or terminal history.
```shell
cat ~/.openclaw/config/glm.json 2>/dev/null | python3 -c "import json,sys; d=json.load(sys.stdin); print(d.get('api_key', ''))"
```

Check only whether the key exists without printing it, use a proper secret store or environment-variable injection, avoid pasting keys into chat, and rotate the key if it has already been exposed.
Installing or invoking the skill may execute third-party package code on your machine.
The skill relies on npm packages executed through npx without version pinning. This is central to the MCP setup, but it means package updates or package provenance affect what code runs locally.
```shell
npx -y mcporter --version ... --command "npx -y @z_ai/mcp-server"
```
Verify the package source against the official GLM documentation, prefer pinned versions, and run setup only in an environment where you are comfortable executing those packages.
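Pinning can be sketched as follows; the version numbers are placeholders, not verified releases of these packages, and the `mcporter config add` form is taken from the skill's own setup commands.

```shell
# Unpinned: whatever version npm resolves at run time is executed.
npx -y mcporter --version

# Pinned (x.y.z are placeholders): lock both the launcher and the MCP
# server to exact versions you have reviewed, so a package update
# cannot silently change what code runs locally.
npx -y mcporter@x.y.z config add glm-vision \
  --command "npx -y @z_ai/mcp-server@x.y.z"
```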
Screenshots or images may contain private information that is sent to the GLM service for analysis.
The workflow passes a user-selected local image path or URL and a prompt to the GLM vision MCP server. This is expected for image analysis, but it means image content may be processed outside the local environment.
```shell
mcporter call glm-vision.analyze_image prompt="<question about the image>" image_source="<image path or URL>"
```
Use only images you are allowed to share with the provider, and verify the provider’s privacy and retention terms before processing sensitive screenshots or documents.
