Install

    openclaw skills install llm-speedtest

Ping major LLM providers in parallel and compare real API latency (TTFT).

Usage

Run with /ping, or ask about model latency/speed. Either runs scripts/ping.sh, which:

- Sources API keys via `pass shared/` (users may need to adapt key sourcing for their setup)
- Sends parallel curl requests to each provider with a minimal prompt ("hi", max_tokens=1)

Results are sorted fastest-to-slowest with color badges:
Example:
⚡ Model Latency — 14:32
🟢 `Gemini 412ms`
🟢 `GPT-4o 623ms`
🟢 `Sonnet 891ms`
🟡 `Grok 2104ms`
🟡 `MiniMax 3210ms`
🟡 `Opus 4102ms`
_real API latency (TTFT)_
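The sort-and-badge step above can be sketched in plain shell. This is an illustration only, not the shipped script: it assumes each probe emits a `Name <millis>` line, and the 1000 ms green/yellow cutoff is inferred from the example output, not confirmed by the actual code.

```shell
#!/bin/sh
# Illustrative sketch: sort "Name <millis>" lines fastest-first and
# prefix a color badge. The 1000 ms threshold is a guess from the example.
badge_sort() {
  sort -k2 -n | while read -r name ms; do
    if [ "$ms" -lt 1000 ]; then printf '🟢'; else printf '🟡'; fi
    printf ' `%s %sms`\n' "$name" "$ms"
  done
}

# Example input in arbitrary order; output comes back fastest-first.
printf 'Sonnet 891\nGrok 2104\nGemini 412\n' | badge_sort
```

`sort -k2 -n` orders numerically on the second column, which is why the probes can finish in any order.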
| Provider | Model |
|---|---|
| Anthropic | Claude Sonnet 4 |
| Anthropic | Claude Opus 4 |
| OpenAI | GPT-4o-mini |
| Google | Gemini 2.5 Flash |
| MiniMax | MiniMax-M1 |
| xAI | Grok 3 Mini Fast |
~$0.0001 per run (1 token per model, cheapest tiers).
This skill uses `pass shared/` for API key retrieval. If you don't use `pass`, you'll need to adapt `scripts/ping.sh` to source keys from environment variables or another secret manager.
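An env-var-based probe might look like the sketch below. This is not the shipped `scripts/ping.sh`: the endpoint, model name, and `OPENAI_API_KEY` variable are illustrative assumptions, and curl's `%{time_starttransfer}` (seconds to the first response byte) is used as a TTFT proxy.

```shell
#!/bin/sh
# Sketch of one env-var-based probe (not the shipped scripts/ping.sh).
# %{time_starttransfer} reports seconds until the first response byte.
to_ms() { awk '{ printf "%d", $1 * 1000 }'; }

probe_openai() {
  curl -s -o /dev/null -w '%{time_starttransfer}' \
    https://api.openai.com/v1/chat/completions \
    -H "Authorization: Bearer $OPENAI_API_KEY" \
    -H 'Content-Type: application/json' \
    -d '{"model":"gpt-4o-mini","max_tokens":1,"messages":[{"role":"user","content":"hi"}]}'
}

echo "GPT-4o-mini $(probe_openai | to_ms)ms"
```

Since the skill pings providers in parallel, one such probe per provider would typically be backgrounded with `&` and collected with `wait` before sorting.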