Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Ms Ai

v1.2.1

ModelScope AI skill: image generation, image editing, image analysis, and text generation. Supports text-to-image, image-to-image, vision understanding, and text generation, with automatic model rotation on rate limits.

0 stars · 168 downloads · 1 current · 1 all-time
by Luhui WANG (@luhuiwang)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for luhuiwang/ms-ai.

Prompt preview (Install & Setup):
Install the skill "Ms Ai" (luhuiwang/ms-ai) from ClawHub.
Skill page: https://clawhub.ai/luhuiwang/ms-ai
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install ms-ai

ClawHub CLI

Package manager switcher

npx clawhub@latest install ms-ai
Security Scan
VirusTotal: Benign
OpenClaw: Suspicious (high confidence)
⚠️ Purpose & Capability
Name, description, SKILL.md, and the Python scripts consistently implement text/image generation and vision features via the ModelScope API (api-inference.modelscope.cn); functionality matches the stated purpose. However, the registry metadata declares no required environment variables or primary credential, while the code and SKILL.md clearly require MODELSCOPE_API_KEY (supporting multiple comma-separated keys). This mismatch is an incoherence: the skill will fail without the key, but the registry does not advertise that requirement.
⚠️ Instruction Scope
SKILL.md and the scripts are explicit about what to run (pip install requests Pillow; run the provided scripts) and instruct the user to edit ~/.openclaw/openclaw.json to supply MODELSCOPE_API_KEY. The scripts only read image files, a history JSON, and MODELSCOPE_API_KEY; they transmit images/prompts to ModelScope endpoints (expected). One notable issue: the code prints the first ~12 characters of each API key to stderr when selecting keys (potential secret leakage into logs). Otherwise the instructions do not request unrelated system files or unrelated credentials.
Install Mechanism
There is no install spec that downloads arbitrary code; the package includes Python scripts and documentation. Runtime requires pip packages (requests, Pillow) which the SKILL.md documents. No external binary downloads or obscure URLs are used by the install process. Network calls at runtime go to the documented ModelScope endpoints.
⚠️ Credentials
The skill legitimately needs MODELSCOPE_API_KEY (and supports supplying multiple keys). Requesting a single API key for the service the skill integrates with is proportionate. However the registry metadata omits declaring this requirement (it lists no required env vars), which is misleading. Additionally, scripts reveal the first ~12 characters of each API key in stderr, which could leak key fragments into logs/telemetry; this is unnecessary and increases risk. No other unrelated credentials are requested.
Persistence & Privilege
The skill does not set always:true and does not request elevated platform privileges. It asks the user to add a skill-scoped env entry to ~/.openclaw/openclaw.json (normal for skill-level config). The scripts do not modify other skills or system-wide config beyond advising how to add its own setting.
What to consider before installing
This skill appears to implement exactly what it claims (ModelScope text/image/vision), but it requires MODELSCOPE_API_KEY(s) and the registry metadata does not declare that, so the platform record is incomplete. Before installing:

  1. Verify you are comfortable storing MODELSCOPE_API_KEY in ~/.openclaw/openclaw.json (this stores keys in plaintext), or prefer exporting the env var instead.
  2. Prefer creating a limited-privilege / limited-quota ModelScope key, and rotate it if exposed.
  3. Be aware the scripts print the first ~12 chars of each API key to stderr; if you collect logs or share stderr, that could leak key fragments. Consider removing or changing that logging in common.py.
  4. Confirm network access to https://api-inference.modelscope.cn/ is acceptable and that you consent to sending images/prompts to that service (images are uploaded base64-encoded).
  5. Ask the publisher or registry maintainer to update the skill metadata to declare MODELSCOPE_API_KEY as a required credential, so the requirement is visible prior to install.

If you want higher assurance, inspect the included Python files locally (they are small and readable) before running.
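The key-fragment logging flagged in point 3 can be neutralized with a small masking helper. This is a sketch of the kind of change one could make; common.py's actual logging code is not shown on this page, so the function below is illustrative, not the skill's own:

```python
def mask_key(key: str, visible: int = 4) -> str:
    """Show only the first few characters of an API key for log output.

    Illustrative helper (not the skill's actual code): keeps enough of the
    key to tell entries apart without leaking a usable fragment.
    """
    if len(key) <= visible:
        return "*" * len(key)
    return key[:visible] + "*" * (len(key) - visible)
```

Logging `mask_key(key)` instead of `key[:12]` would still let you distinguish keys while revealing far less.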


latest: vk972yez29dpqfbffg0035yygj983z7ew
168 downloads
0 stars
8 versions
Updated 4w ago
v1.2.1
MIT-0

MS-ai — ModelScope AI skill

Text-to-image, image-to-image, vision understanding, and text generation via the ModelScope API, with automatic model rotation on 429 rate limits.

Prerequisites

1. Install dependencies

pip install requests Pillow

2. Configure the API key

Edit ~/.openclaw/openclaw.json and add the following under skills.entries:

{
  "skills": {
    "entries": {
      "ms-ai": {
        "enabled": true,
        "env": {
          "MODELSCOPE_API_KEY": "ms-key1,ms-key2,ms-key3"
        }
      }
    }
  }
}

Separate multiple keys with a comma (,). When all models fail under one key, the scripts automatically switch to the next key and retry all models.

💡 You can also use the environment variable export MODELSCOPE_API_KEY=key1,key2, but the skill-level configuration is recommended: it is tidier and applies only to this skill.

⚠️ This skill calls the ModelScope API and consumes quota. Unless the user explicitly says "use ms-ai", you must inform them and get confirmation before running.
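The comma-separated key format above can be consumed along these lines; this is a sketch of the documented behavior, not the skill's actual common.py:

```python
import os
import sys

def load_keys() -> list[str]:
    """Read MODELSCOPE_API_KEY and split it into a list of candidate keys."""
    raw = os.environ.get("MODELSCOPE_API_KEY", "")
    # Ignore empty segments so trailing commas and stray spaces are harmless.
    keys = [k.strip() for k in raw.split(",") if k.strip()]
    if not keys:
        sys.exit("MODELSCOPE_API_KEY is not set")
    return keys
```

Stripping whitespace and dropping empty segments means `"key1, key2,"` still yields two usable keys.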

Feature overview

| Feature | Script | Description |
| --- | --- | --- |
| Text-to-image | generate.py | text description → image |
| Image-to-image | generate.py --image | upload an image + description → modified image |
| Vision understanding | vision.py | analyze image content, OCR, scene recognition |
| Text generation | text.py | multi-model chat / text generation |

Image generation (generate.py)

# Text-to-image (default 1920x1080 landscape)
python3 <skill_dir>/scripts/generate.py --prompt "a golden cat" --output cat.jpg

# ⭐ Recommended: use --aspect to name the scene (preferred for agents)
python3 <skill_dir>/scripts/generate.py --prompt "PPT cover" --aspect ppt --output slide.png
python3 <skill_dir>/scripts/generate.py --prompt "Douyin short video" --aspect douyin --output video.png
python3 <skill_dir>/scripts/generate.py --prompt "portrait poster" --aspect poster --output poster.png

# You can also pass an aspect ratio directly
python3 <skill_dir>/scripts/generate.py --prompt "landscape painting" --aspect 16:9 --output landscape.jpg

# Image-to-image
python3 <skill_dir>/scripts/generate.py --prompt "put a birthday hat on the dog" --image dog.png --output result.jpg

# Exact pixel size (highest priority)
python3 <skill_dir>/scripts/generate.py --prompt "custom" --width 2560 --height 1440 --output custom.jpg
| Parameter | Description | Default |
| --- | --- | --- |
| --prompt | description text (required) | |
| --output | output file path | output.jpg |
| --model | model (alias or full ID) | auto-rotate |
| --image | input image (image-to-image; repeatable) | |
| --aspect | aspect-ratio preset or scene name (recommended) | |
| --width | image width | 1920 |
| --height | image height | 1080 |
| --lora | LoRA model ID | |

📐 Priority: --aspect > --width/--height > default 1920×1080

🎯 Scene → size quick reference (required reading for agents!)

Before generating an image, you must choose the correct --aspect for the use case:

| Scene | --aspect | Size | Notes |
| --- | --- | --- | --- |
| PPT / slides | ppt / 16:9 | 1920×1080 | landscape presentation |
| Landscape video | video-h / 16:9 | 1920×1080 | YouTube, Bilibili |
| Portrait short video | video-v / douyin / 9:16 | 1080×1920 | Douyin, Kuaishou, Reels |
| Article / video cover | cover / 16:9 | 1920×1080 | WeChat Official Accounts, Toutiao |
| WeChat Official Account | weixin / 16:9 | 1920×1080 | article cover image |
| Portrait poster | poster / 2:3 | 1200×1800 | event posters, promotion |
| Social media | social / 1:1 | 1024×1024 | Weibo, Moments |
| Avatar | avatar / 1:1 | 1024×1024 | profile picture |
| Cinematic | cinema / 21:9 | 2560×1080 | ultrawide |
| Photography / print | photo / 3:2 | 1800×1200 | standard photography |
| A4 document | print-a4 / 3:4 | 1152×1536 | portrait document |
| iPad landscape | 4:3 | 1536×1152 | traditional ratio |

⚠️ Agent rule: before each image generation, determine the use case first, then pass --aspect accordingly. When unsure, default to 16:9 (landscape).

Vision understanding (vision.py)

python3 <skill_dir>/scripts/vision.py --image photo.jpg --prompt "Describe this image"
python3 <skill_dir>/scripts/vision.py --image screenshot.png --prompt "What text is in this image?"
| Parameter | Description | Default |
| --- | --- | --- |
| --prompt | analysis prompt (required) | |
| --image | input image path | |
| --model | model | auto-rotate |
| --stdin-b64 | read a base64 image from stdin | |

Use exactly one of --image or --stdin-b64. Prefer this script for vision-understanding tasks.

Text generation (text.py)

python3 <skill_dir>/scripts/text.py --prompt "Explain quantum computing"
python3 <skill_dir>/scripts/text.py --prompt "Write a poem" --stream
python3 <skill_dir>/scripts/text.py --prompt "Continue" --history history.json --output history.json
| Parameter | Description | Default |
| --- | --- | --- |
| --prompt | user input (required) | |
| --model | model | auto-rotate |
| --max-tokens | maximum output tokens | 4096 |
| --temperature | temperature (0-2) | 0.7 |
| --stream | streaming output | off |
| --history | history-messages JSON file | |

Model configuration

Text-to-image models

| Priority | Model ID | Alias |
| --- | --- | --- |
| 1 | FireRedTeam/FireRed-Image-Edit-1.1 | firered |
| 2 | Qwen/Qwen-Image-2512 | qwen |
| 3 | Qwen/Qwen-Image-Edit-2511 | edit |
| 4 | Tongyi-MAI/Z-Image-Turbo | turbo |

Vision understanding / text generation models

| Priority | Model ID | Alias |
| --- | --- | --- |
| 1 | moonshotai/Kimi-K2.5 | kimi |
| 2 | ZhipuAI/GLM-5 | glm |
| 3 | MiniMax/MiniMax-M2.5 | minimax |
| 4 | Qwen/Qwen3.5-397B-A17B | qwen |
| 5 | XiaomiMiMo/MiMo-V2-Flash (text only) | mimo |

Rotation mechanism (Key × model double rotation)

Strategy: each key tries all models before rotation moves on to the next key.

  1. Use Key1 and try every model in priority order.
  2. If all of Key1's models fail → switch to Key2 and retry all models.
  3. If every key × every model fails → exit with an error.

At most n_keys × n_models attempts are made. On a 429 rate limit, the script waits 10 seconds and then moves to the next model.
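The rotation described above can be sketched as two nested loops. Here call_model is a hypothetical stand-in for the actual ModelScope request; the real common.py may structure this differently:

```python
import time

def generate_with_rotation(keys, models, call_model):
    """Try every model under each key; rotate to the next key only when
    the current key is exhausted.

    call_model(key, model) is assumed to return (http_status, result).
    """
    for key in keys:                      # outer loop: keys rotate last
        for model in models:              # inner loop: models in priority order
            status, result = call_model(key, model)
            if status == 200:
                return result
            if status == 429:
                time.sleep(10)            # documented 10 s back-off on rate limits
            # any other failure: fall through to the next model
    raise RuntimeError("all keys x all models failed")
```

With 3 keys and 4 models this yields at most 12 attempts, matching the n_keys × n_models bound stated above.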

Troubleshooting

| Problem | Cause | Fix |
| --- | --- | --- |
| MODELSCOPE_API_KEY not set | API key not configured | configure it under skills.entries.ms-ai.env |
| 429 Too Many Requests | rate limiting | the scripts rotate automatically; wait and retry |
| All models fail | quota exhausted or everything rate-limited | wait, then retry |
| ImportError: requests | missing dependency | pip install requests Pillow |
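A quick way to check for the ImportError row above is to probe both third-party packages without importing them (standard-library only; no network needed):

```shell
python3 - <<'EOF'
import importlib.util

# Report whether each required third-party package is importable.
for mod in ("requests", "PIL"):
    found = importlib.util.find_spec(mod) is not None
    print(f"{mod}: {'ok' if found else 'missing (pip install requests Pillow)'}")
EOF
```

Note that Pillow is installed as `Pillow` but imported as `PIL`, which is why the check looks for `PIL`.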

Changelog

v1.2.0 (2026-03-26)

  • --aspect preset parameter: supports aspect ratios (16:9, 9:16, etc.) and scene names (ppt, douyin, poster, etc.)
  • Default size: changed from 1024×1024 to 1920×1080 (16:9 landscape, covering PPT/video/cover and other mainstream scenes)
  • Scene quick reference: SKILL.md gained a scene → size table so agents can pick the right preset at a glance
  • Supports 8 aspect ratios + 13 scene aliases

v1.1.0 (2026-03-24)

  • Key rotation: supports multiple API keys (comma-separated); when all models fail, automatically switch to the next key
  • Added the common.py shared module, unifying the key + model double-rotation logic
  • Rotation strategy: try every model under each key → then move to the next key

v1.0.2 (2026-03-24)

  • Unified configuration under skill-level env (openclaw.json skills.entries)
  • Documentation restructuring

v1.0.1 (2026-03-24)

  • Removed the custom openclaw.json tools key configuration

v1.0.0 (2026-03-24)

  • Initial release: text-to-image, image-to-image, vision understanding, text generation
  • Automatic model rotation (429 rate-limit handling)
