Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Model Config Check

v1.0.0

Validates that model configuration is correct and that models can connect and return content. Use when the user says "check the models", "test the models", "are the models working", "model config", or "diagnose model problems". **After every change to model configuration (config.patch/config.apply touching models.providers), validation must run automatically.** When the user supplies only a model name + API key...

by PL Uncle (@jasonzhang2015)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for jasonzhang2015/model-config-check.

Prompt Preview: Install & Setup
Install the skill "Model Config Check" (jasonzhang2015/model-config-check) from ClawHub.
Skill page: https://clawhub.ai/jasonzhang2015/model-config-check
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install model-config-check

ClawHub CLI


npx clawhub@latest install model-config-check
Security Scan

VirusTotal: Benign (View report →)
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The stated purpose (validate model configs, connectivity, and model responses) matches the script's network and API checks. However, SKILL.md also promises automatic creation and application of configs when the user supplies only a model + API key, and automatic triggering on gateway config changes; the provided script (scripts/check_models.sh) only reads ~/.openclaw/openclaw.json and validates providers. It does not implement auto-generation/application of config.patch or the 'auto-detect + write config' flow. The script also relies on python3 and curl, but the skill declares no required binaries.
Instruction Scope
Instructions require reading ~/.openclaw/openclaw.json and running the bundled check_models.sh, which is in scope. But SKILL.md instructs additional behaviors (web searching to discover base URLs, writing config via gateway config.patch, and restarting the gateway) that are not implemented in the script; these are broad actions that would modify configuration and trigger network calls. The script's extraction step also prints each provider's apiKey into its tab-separated output (used internally), which increases the chance that API keys are exposed in process output or logs.
Install Mechanism
No install spec (instruction-only + included shell script), so nothing is downloaded or installed by the skill itself. This is lower install risk. Note: runtime relies on system utilities (python3, curl, bash, mktemp) which are not declared in the skill metadata.
Credentials
The skill legitimately needs access to model provider configuration (API keys and base URLs) stored in ~/.openclaw/openclaw.json to perform its checks, and it requests no unrelated environment variables. However, the script exposes apiKey values in the python extractor output (they are emitted into the PROVIDERS variable), which risks leaking secrets to logs or other observers; this handling is disproportionate if logs are not protected.
Persistence & Privilege
Skill metadata does not request always:true and is user-invocable, which is appropriate. SKILL.md's requirement that validation 'must automatically run' after gateway config.patch/config.apply/update.run implies integration with gateway event hooks or automatic invocation; the skill itself does not declare how it will be auto-triggered. If the platform wires this up to run automatically on config changes, that increases its blast radius and you should ensure proper authorization and auditing for automatic runs.
What to consider before installing
This skill appears to do what it says (check model provider configs and call providers to verify responses) but has a few red flags you should address before installing: (1) The SKILL.md promises auto-detect/auto-write of provider config when a user gives only model+API key, but the included script does not implement writing/applying configs — clarify or remove the promise. (2) The script uses python3 and curl at runtime but the skill metadata lists no required binaries; ensure those tools exist on hosts where it will run. (3) The extractor prints API keys into its tab-separated output, which can leak keys to stdout/logs — restrict log access or modify the script to avoid printing secrets. (4) The skill expects to be auto-run on gateway config changes; if you enable automatic invocation, ensure it runs under appropriate permissions and that audit/logging is in place. Recommended next steps: review and harden the script to avoid printing secrets, add declared required binaries, test in a staging environment, and only enable automatic triggers after you confirm safe invocation hooks and logging controls.
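One way to address the secret-printing concern noted above is to mask keys before they ever reach stdout. A minimal sketch, assuming you modify the extractor to emit masked values (`mask_key` is a hypothetical helper, not part of the bundled script):

```shell
# Hypothetical helper: mask an API key before it reaches stdout or logs.
# Keeps the first 4 characters for identification, replaces the rest with "****".
mask_key() {
  key="$1"
  printf '%s****\n' "$(printf '%s' "$key" | cut -c1-4)"
}

# Instead of emitting the raw apiKey in the provider listing,
# the extractor could emit the masked form:
mask_key "sk-abcdef1234567890"   # → sk-a****
```

The full key stays in a shell variable for the actual curl call; only the masked form appears in output.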

Like a lobster shell, security has layers — review code before you run it.

latest: vk97agzp71hymzy0h3ck52dgm0d83y97t
93 downloads · 0 stars · 1 version
Updated 4w ago
v1.0.0
MIT-0

Model Config Validation Skill

⚡ Auto-Trigger Rules

When any of the following happens, the validation script must run automatically, without the user asking:

  1. gateway config.patch modified models-related configuration (a provider or model was added/changed)
  2. gateway config.apply replaced the entire configuration and the change includes models
  3. after a gateway update.run (model interfaces may have changed)

Auto-validation flow:

  1. After the config write completes and the gateway restarts, wait 5 seconds for the service to become ready
  2. Run bash ~/.openclaw/workspace/skills/model-config-check/scripts/check_models.sh
  3. Parse the output and report the validation results to the user
  4. If a new model is unavailable, immediately tell the user the specific cause and a suggested fix

Manual trigger: run the same flow when the user says "check the models" / "test the models" / "are the models working" / "diagnose the models".

🤖 Auto-Configuration Flow

When the user provides only a model name + API key, complete the configuration automatically via the following flow:

Step 1: Identify the Provider

Match a known provider by model-name prefix:

| Model prefix | Provider | Base URL | API type | Endpoint |
| --- | --- | --- | --- | --- |
| gpt-*, o1-*, o3-*, o4-*, chatgpt-* | OpenAI | https://api.openai.com | openai-completions | /v1/chat/completions |
| claude-* | Anthropic | https://api.anthropic.com | anthropic-messages | /v1/messages |
| deepseek-* | DeepSeek | https://api.deepseek.com | openai-completions | /v1/chat/completions |
| glm-*, chatglm-* | Zhipu | https://open.bigmodel.cn/api/paas/v4 | openai-completions | /v1/chat/completions |
| qwen-*, qwq-*, qvq-* | Alibaba Tongyi (DashScope) | https://dashscope.aliyuncs.com/compatible-mode/v1 | openai-completions | /v1/chat/completions |
| moonshot-*, kimi-* | Moonshot | https://api.moonshot.cn/v1 | openai-completions | /v1/chat/completions |
| doubao-*, ep-* | Volcengine (Doubao) | https://ark.cn-beijing.volces.com/api/v3 | openai-completions | /v1/chat/completions |
| minimax-*, abab-* | MiniMax | https://api.minimax.chat/v1 | openai-completions | /v1/chat/completions |
| yi-* | 01.AI | https://api.lingyiwanwu.com/v1 | openai-completions | /v1/chat/completions |
| ernie-*, baidu-* | Baidu ERNIE | https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop | openai-completions | — |
| grok-* | xAI (Grok) | https://api.x.ai/v1 | openai-completions | /v1/chat/completions |
| gemini-* | Google Gemini | https://generativelanguage.googleapis.com/v1beta | special API; handle separately | — |
| mistral-*, codestral-* | Mistral | https://api.mistral.ai/v1 | openai-completions | /v1/chat/completions |
| mimo-* | Xiaomi (MiMo) | https://api.xiaomimimo.com/anthropic | anthropic-messages | /v1/messages |

No match: automatically search "[model_name] API documentation base_url" to confirm the configuration.
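The prefix match in Step 1 can be sketched as a shell case statement. A partial sketch covering a subset of the table (the lowercase provider keys are illustrative; the full mapping has more entries):

```shell
# Map a model name to a provider key by prefix; "unknown" falls through
# to the documentation-search path described above.
detect_provider() {
  case "$1" in
    gpt-*|o1-*|o3-*|o4-*|chatgpt-*) echo "openai" ;;
    claude-*)                       echo "anthropic" ;;
    deepseek-*)                     echo "deepseek" ;;
    glm-*|chatglm-*)                echo "zhipu" ;;
    qwen-*|qwq-*|qvq-*)             echo "dashscope" ;;
    moonshot-*|kimi-*)              echo "moonshot" ;;
    grok-*)                         echo "xai" ;;
    gemini-*)                       echo "gemini" ;;
    *)                              echo "unknown" ;;
  esac
}

detect_provider "deepseek-chat"    # → deepseek
detect_provider "my-custom-model"  # → unknown
```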

Step 2: Generate the Configuration

Generate a config.patch-format configuration from the match result, including:

  • the provider name (the lowercase provider name as the key)
  • baseUrl
  • apiKey (supplied by the user)
  • the api type
  • the models list (including at least the model(s) the user mentioned)
  • contextWindow and maxTokens for each model (filled with defaults from known information)
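Putting Step 2 together, a generated patch for deepseek-chat might look like the fragment below. The exact openclaw.json schema is assumed from the field names listed above, not verified against the gateway:

```json
{
  "models": {
    "providers": {
      "deepseek": {
        "baseUrl": "https://api.deepseek.com",
        "apiKey": "<user-supplied key>",
        "api": "openai-completions",
        "models": [
          { "id": "deepseek-chat", "contextWindow": 64000, "maxTokens": 8192 }
        ]
      }
    }
  }
}
```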

Step 3: Apply the Configuration

Write the configuration with gateway config.patch and restart.

Step 4: Auto-Validate

After the restart completes, run the validation script and report the results.

Model Context/Output Defaults

Reference contextWindow and maxTokens values for common models:

| Model | contextWindow | maxTokens |
| --- | --- | --- |
| gpt-4o | 128000 | 16384 |
| gpt-4-turbo | 128000 | 4096 |
| o1/o1-mini | 128000 | 100000 |
| claude-3.5-sonnet | 200000 | 8192 |
| claude-3-opus | 200000 | 4096 |
| deepseek-chat | 64000 | 8192 |
| deepseek-reasoner | 64000 | 8192 |
| glm-4 | 128000 | 4096 |
| qwen-max | 32000 | 8192 |
| qwen-long | 1000000 | 65536 |
| kimi-latest | 128000 | 4096 |
| doubao-pro | 4096 (volcengine) | 4096 |
| minimax-abab6.5 | 245760 | 4096 |
| mimo-v2-pro | 262144 | 8192 |

Unknown models: default contextWindow 128000, maxTokens 4096. Update after confirming against the provider's docs.

Check each item in the order below, reporting ✅/❌ for each:

1. Read the Configuration

Use the read tool to read ~/.openclaw/openclaw.json and extract every model configuration under models.providers.

2. Configuration Completeness Check

For each provider, check:

  • baseUrl exists and is well-formed (starts with http/https)
  • apiKey exists and is non-empty
  • the api type is valid (openai-completions / anthropic-messages / anthropic-completions)
  • every model has an id
  • contextWindow and maxTokens are set
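The per-provider field checks can be sketched in shell, assuming the values have already been extracted from ~/.openclaw/openclaw.json (extraction itself is omitted here):

```shell
# Validate one provider's fields: baseUrl scheme, non-empty apiKey, known api type.
# Emits one ✅/❌ line per check, matching the report style used by this skill.
check_provider() {
  base_url="$1"; api_key="$2"; api_type="$3"
  case "$base_url" in
    http://*|https://*) echo "baseUrl ✅" ;;
    *)                  echo "baseUrl ❌ (must start with http/https)" ;;
  esac
  [ -n "$api_key" ] && echo "apiKey ✅" || echo "apiKey ❌ (missing or empty)"
  case "$api_type" in
    openai-completions|anthropic-messages|anthropic-completions) echo "api ✅" ;;
    *) echo "api ❌ (unknown type: $api_type)" ;;
  esac
}

check_provider "https://api.deepseek.com" "sk-xxx" "openai-completions"
```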

3. Network Connectivity Check

Use exec to run curl and test whether each provider's baseUrl is reachable:

curl -s -o /dev/null -w "%{http_code}" --connect-timeout 5 "<baseUrl>"

4. Live API Call Test

For each provider, use exec to run a real API call test:

OpenAI-compatible API (openai-completions)

curl -s -X POST "<baseUrl>" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <apiKey>" \
  -d '{"model":"<modelId>","messages":[{"role":"user","content":"reply 1"}],"max_tokens":10}' \
  --connect-timeout 10 --max-time 30

Anthropic-compatible API (anthropic-messages)

curl -s -X POST "<baseUrl>/v1/messages" \
  -H "Content-Type: application/json" \
  -H "x-api-key: <apiKey>" \
  -H "anthropic-version: 2023-06-01" \
  -d '{"model":"<modelId>","max_tokens":10,"messages":[{"role":"user","content":"reply 1"}]}' \
  --connect-timeout 10 --max-time 30

Note: for anthropic-messages, the baseUrl usually does not include /v1/messages, so it must be appended. Some baseUrls already include the path; in that case, use them as-is.

5. Result Parsing

Check the returned result:

  • the HTTP status code is 200
  • the response body contains valid (non-empty) content
  • for OpenAI APIs: choices[0].message.content is non-empty
  • for Anthropic APIs: content[0].text is non-empty
  • if content is empty but reasoning_content has a value, the model put its output into the thinking field; flag this as a configuration issue
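This classification can be sketched with a small inline python3 parser (the bundled script already depends on python3); the THINKING_ONLY label here mirrors the status code the skill uses in its troubleshooting guidance. A sketch, not the skill's actual parser:

```shell
# Classify a provider response body: OK (usable content), THINKING_ONLY
# (output landed in the reasoning field), or EMPTY.
classify_response() {
  python3 -c '
import json, sys
body = json.loads(sys.argv[1])
# OpenAI-style: choices[0].message.content; Anthropic-style: content[0].text
msg = body.get("choices", [{}])[0].get("message", {}) if "choices" in body else {}
content = msg.get("content") or (body.get("content", [{}])[0].get("text") if "content" in body else None)
if content:
    print("OK")
elif msg.get("reasoning_content"):
    print("THINKING_ONLY")
else:
    print("EMPTY")
' "$1"
}

classify_response '{"choices":[{"message":{"content":"1"}}]}'           # → OK
classify_response '{"choices":[{"message":{"reasoning_content":"x"}}]}' # → THINKING_ONLY
```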

6. Generate the Report

Summary output format:

## Model Configuration Validation Report

### Provider: <name>
- Config completeness: ✅/❌
- Network connectivity: ✅/❌ (HTTP <code>)
- API authentication: ✅/❌
- Model response: ✅/❌
- Model: <modelId> → status: ✅/❌ [notes]

### Summary
- Usable models: X/Y
- Unusable models: [list]
- Suggestions: [fix recommendations]

URL Path Handling Rules

Different providers structure their baseUrl differently; the script handles this automatically:

| API type | Rule |
| --- | --- |
| anthropic-messages | automatically append /v1/messages (unless the URL already includes it) |
| openai-completions | automatically append /chat/completions (unless the URL already ends with /chat/completions or /completions) |
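The two appending rules above can be sketched as a small shell helper (a sketch of the stated rules, not the bundled script's implementation):

```shell
# Build the full endpoint URL from a baseUrl and api type, appending the
# endpoint path only when the baseUrl does not already carry it.
endpoint_url() {
  base="${1%/}"; api="$2"   # strip a trailing slash for consistent joins
  case "$api" in
    anthropic-messages)
      case "$base" in
        */v1/messages) echo "$base" ;;
        *)             echo "$base/v1/messages" ;;
      esac ;;
    openai-completions)
      case "$base" in
        */chat/completions|*/completions) echo "$base" ;;
        *)                                echo "$base/chat/completions" ;;
      esac ;;
  esac
}

endpoint_url "https://api.anthropic.com" "anthropic-messages"
# → https://api.anthropic.com/v1/messages
endpoint_url "https://api.deepseek.com/v1" "openai-completions"
# → https://api.deepseek.com/v1/chat/completions
```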

Common Problems and Fixes

| Problem | Cause | Fix |
| --- | --- | --- |
| HTTP 401/403 | API key invalid or expired | update the apiKey |
| HTTP 500 | server-side error / account exhausted | check the account balance or contact the provider |
| connection timeout | baseUrl unreachable | check the network or change the baseUrl |
| content is empty | API type mismatch | check that the api field is correct (openai vs anthropic) |
| content is empty but reasoning has a value | model output went to the thinking field | increase max_tokens or switch the API type |
| Relay service error | relay service problem | check the relay service status and account |
| THINKING_ONLY | reasoning model did not finish thinking | normal; the model is usable; increase max_tokens to get the full output |
