opencode-model-benchmark

v1.0.0

A benchmark tool for the free models on OpenCode Zen. Use this Skill when a user wants to test the performance of free models on the OpenCode Zen platform (response time, tokens/s, generation speed). Trigger words: model test, performance test, benchmark, speed test, tokens/s, throughput, opencode test, free model speed comparison. This S...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for braveheartzjh/opencode-model-benchmark.

Prompt Preview: Install & Setup
Install the skill "opencode-model-benchmark" (braveheartzjh/opencode-model-benchmark) from ClawHub.
Skill page: https://clawhub.ai/braveheartzjh/opencode-model-benchmark
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install opencode-model-benchmark

ClawHub CLI


npx clawhub@latest install opencode-model-benchmark
Security Scan
Capability signals
Requires sensitive credentials
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal
Benign
OpenClaw
Benign (high confidence)
Purpose & Capability
Name/description, SKILL.md, and the script all focus on fetching a model list from opencode.ai/docs and calling the opencode.ai/zen/v1 API to measure response time and tokens/s. The required actions (HTTP requests to the docs and to the API endpoints) are appropriate for this purpose.
Instruction Scope
SKILL.md directs running the included script which: fetches the official docs, iterates known models, calls either /chat/completions or /responses, and prints a Markdown report to stdout. The script does not read local secret files, does not exfiltrate data to unexpected endpoints, and does not write files (report is printed).
Install Mechanism
No install spec is provided and the skill is instruction+script only. The python script uses standard library modules (urllib, json, time) and does not download or extract additional code at install time.
Credentials
The skill requires no environment variables, keys, or special config paths. The script runs without credentials and only makes outbound HTTP requests to opencode.ai as described in the docs.
Persistence & Privilege
The skill's `always` flag is false, and the skill does not request persistent system presence or modify other skills or system configuration. It executes ad hoc when invoked and does not enable privileged behavior.
Assessment
This skill appears coherent and low-risk: it only makes HTTP requests to opencode.ai (the docs page and the zen API) and prints a Markdown report to stdout. Before installing, confirm you are comfortable with outbound network calls to https://opencode.ai and that running the script from your agent is acceptable. No API keys or local secrets are requested. If you operate in a restricted environment, consider running the script in a controlled session or sandbox and review network logs to verify calls go only to the expected opencode.ai endpoints.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97eay7bsxe6hwt3j5hkbwt5rd855fz1
102 downloads
0 stars
1 version
Updated 1w ago
v1.0.0
MIT-0

OpenCode Zen Free Model Benchmark

Overview

Run performance benchmarks against all free models on the OpenCode Zen platform (https://opencode.ai/zen/v1):

  • Measure each model's response time (seconds)
  • Measure each model's generation speed (tokens/s)
  • Count prompt tokens / completion tokens
  • Print a Markdown test report (ranked by tokens/s) directly in the chat window; no files are written

No API key required; the script calls https://opencode.ai/zen/v1 directly.
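The tokens/s metric above is simply completion tokens divided by wall-clock elapsed time. A minimal sketch (the function name is mine, not from the script):

```python
def tokens_per_second(completion_tokens: int, elapsed_s: float) -> float:
    """Generation speed: completion tokens divided by wall-clock seconds."""
    if elapsed_s <= 0:
        # Guard against a zero or negative timer reading.
        return 0.0
    return completion_tokens / elapsed_s

print(tokens_per_second(120, 2.5))  # 48.0
```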


Usage

Standard usage (the report goes straight to the chat window)

python3 ~/.workbuddy/skills/opencode-model-benchmark/scripts/benchmark.py

The script will:

  1. Automatically fetch the latest free-model list (from the official OpenCode docs)
  2. Test each model on the list in turn
  3. Send one standard test prompt to each model
  4. Record response time, token usage, and generation speed
  5. Print the full Markdown report to stdout so it appears in the chat window
  6. Print a quick summary in the terminal
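The per-model measurement in the steps above can be sketched with only the standard library (urllib, json, time), matching the dependencies noted in the security report. The payload shape and response field names here are assumptions based on the OpenAI-compatible chat API, not the script's actual code:

```python
import json
import time
import urllib.request

BASE_URL = "https://opencode.ai/zen/v1"  # matches the skill's config
TEST_PROMPT = "Count 1-20"               # stand-in for the script's prompt

def build_payload(model: str, prompt: str) -> dict:
    # Assumed OpenAI-compatible chat payload; the real script may add fields.
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def benchmark_model(model: str, timeout: int = 60) -> dict:
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(build_payload(model, TEST_PROMPT)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    start = time.monotonic()
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        body = json.load(resp)
    elapsed = time.monotonic() - start
    usage = body.get("usage", {})
    completion = usage.get("completion_tokens", 0)
    return {
        "model": model,
        "elapsed_s": round(elapsed, 2),
        "prompt_tokens": usage.get("prompt_tokens", 0),
        "completion_tokens": completion,
        "tokens_per_s": round(completion / elapsed, 1) if elapsed > 0 else 0.0,
    }
```

`time.monotonic()` is used for the elapsed measurement because it cannot jump backwards if the system clock is adjusted mid-run.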

Workflow

  1. Run the test script
    python3 ~/.workbuddy/skills/opencode-model-benchmark/scripts/benchmark.py

  2. Models are fetched automatically: on startup the script scrapes the latest free-model list from the official docs (about 1–2 seconds)
  3. Wait for the run to finish: testing takes roughly 1–3 minutes, depending on network conditions and model response times
  4. Show the report: once testing completes, the full Markdown report printed to stdout is shown directly in the chat window; no file is written and open_result_view is not needed
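The Markdown report mentioned in step 4 (a table ranked by tokens/s) could be rendered along these lines; the result-dict keys and function name are my own naming, not taken from the script:

```python
def render_report(results: list[dict]) -> str:
    # Sort by tokens/s descending and emit a Markdown table for stdout.
    rows = sorted(results, key=lambda r: r["tokens_per_s"], reverse=True)
    lines = [
        "| Rank | Model | Time (s) | tokens/s |",
        "|------|-------|----------|----------|",
    ]
    for rank, r in enumerate(rows, start=1):
        lines.append(
            f"| {rank} | {r['model']} | {r['elapsed_s']} | {r['tokens_per_s']} |"
        )
    return "\n".join(lines)
```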

Free Model Discovery

On startup the script scrapes the latest free-model list from the official OpenCode docs (https://opencode.ai/docs/zh-cn/zen):

  • Real-time: every run pulls the latest list
  • Smart filtering: models delisted from the docs are marked as ❌ failed
  • Fully standalone: no dependency on any external skill or manual maintenance

The model list is defined by the built-in _KNOWN_MODELS constant and kept current through online validation.
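The cross-check between _KNOWN_MODELS and the docs page can be sketched as a simple containment filter; the sample HTML and model IDs below are hypothetical, chosen only to illustrate the idea:

```python
def extract_model_ids(docs_html: str, known_models: list[str]) -> list[str]:
    # Cross-check the built-in list against the fetched docs page: only IDs
    # that still appear in the docs survive; delisted ones drop out and can
    # later be reported as failed.
    return [m for m in known_models if m in docs_html]

# Hypothetical docs excerpt and model IDs, for illustration only:
sample = "<td>grok-code</td><td>gpt-5-nano</td>"
print(extract_model_ids(sample, ["grok-code", "gpt-5-nano", "old-model"]))
# ['grok-code', 'gpt-5-nano']
```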


Configuration (top of the script)

| Parameter   | Default                    | Description                   |
|-------------|----------------------------|-------------------------------|
| BASE_URL    | https://opencode.ai/zen/v1 | API base URL                  |
| TIMEOUT     | 60                         | Per-request timeout (seconds) |
| TEST_PROMPT | Count 1–20                 | Standard test prompt          |

Notes

  • Some free models are free only for a limited time; an HTTP 402/403 response means the model is no longer free
  • Results are affected by network jitter; run the benchmark several times and average
  • gpt-5-nano uses the /responses endpoint, unlike the other models
  • Tests are spaced 1 second apart to avoid rate limits
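The notes above translate into a couple of small decision helpers; a sketch whose function names are mine, not the script's:

```python
import time

def endpoint_for(model: str) -> str:
    # gpt-5-nano is served via /responses; every other model uses
    # /chat/completions (per the notes above).
    return "/responses" if model == "gpt-5-nano" else "/chat/completions"

def classify_http_error(status: int) -> str:
    # 402/403 from a previously free model means its free tier has ended.
    return "no-longer-free" if status in (402, 403) else "error"

# The main loop would then pace requests to stay under rate limits:
# for model in models:
#     ... send one benchmark request and record the result ...
#     time.sleep(1)  # 1 s gap between models
```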
