API Benchmark

Pass. Audited by ClawScan on Feb 19, 2026.

Overview

The skill's code and runtime instructions match its stated purpose: benchmarking LLM provider response speed. It requires only read access to an OpenClaw provider config and the ability to make API calls to the configured endpoints.
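As an illustration, a provider entry in such a config might look like the following. The exact schema (the "providers" and "apiKey" field names, and the example provider name) is an assumption for illustration, not taken from OpenClaw's documentation; only baseUrl and the ${ENV_VAR} placeholder style are mentioned in this report:

```json
{
  "providers": {
    "example": {
      "baseUrl": "https://api.example.com/v1",
      "apiKey": "${EXAMPLE_API_KEY}"
    }
  }
}
```

The ${EXAMPLE_API_KEY} placeholder keeps the secret in an environment variable rather than in the file itself, which is the safer pattern recommended below.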

This skill appears to do what it says: it reads your OpenClaw provider config and sends test prompts to the provider base URLs to measure token-generation timings. Before installing or running it:

1) Inspect ~/.openclaw/openclaw.json (or the file pointed to by OPENCLAW_CONFIG) to confirm it contains only providers and API keys you trust; prefer ${ENV_VAR} placeholders over storing secrets directly in the file.

2) Be aware that the tool transmits prompts and authentication headers to the configured baseUrl(s), so do not point it at unknown or untrusted endpoints.

3) Run it in a safe environment, or with test keys, if you are unsure.

4) Ensure Python 3 and the requests package are available.

For further assurance, review the included main.py to confirm there are no unexpected network calls and no logging of secrets.
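The timing measurement the skill performs can be sketched roughly as below. This is a minimal illustration of wall-clock latency measurement, not the skill's actual code: the time_completion name, its callable parameter, and the crude characters-per-token estimate are all assumptions made for this example.

```python
import time

def time_completion(send_request, prompt):
    """Measure wall-clock latency of one provider call.

    send_request is any callable that submits `prompt` to the
    configured baseUrl and returns the response text; both the
    name and the signature are illustrative.
    """
    start = time.perf_counter()
    text = send_request(prompt)
    elapsed = time.perf_counter() - start
    # Rough throughput estimate assuming ~4 characters per token;
    # real tokenization varies by provider and model.
    approx_tokens = max(1, len(text) // 4)
    return {"seconds": elapsed, "approx_tokens_per_s": approx_tokens / elapsed}

# Stand-in for a real HTTP call, so the sketch runs without a network:
result = time_completion(lambda p: "x" * 400, "Say hello")
```

In the real skill, send_request would be a requests.post call carrying the API key from the config, which is exactly why the endpoints it targets must be trusted.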