Skill
Ping major LLM providers in parallel and compare real API latency. Run with /ping
MIT-0 · Free to use, modify, and redistribute. No attribution required.
by chapati@chapati23
Security Scan (OpenClaw)
Verdict: Suspicious (high confidence)

Purpose & Capability
Name/description match the implementation: scripts/ping.sh makes parallel requests to multiple LLM provider APIs and measures latency. The providers and models listed in SKILL.md align with the endpoints called in the script.
Instruction Scope
SKILL.md correctly instructs running scripts/ping.sh and documents that it uses `pass shared/` to retrieve API keys. The script only sends minimal prompts and discards response bodies, returning timing results. It does not attempt to read unrelated system files or exfiltrate data to third parties. However, SKILL.md suggests optionally adapting to environment variables even though the shipped script does not read them; that mismatch is a small scope ambiguity the user should address.
Install Mechanism
No install spec (instruction-only with one script). Nothing is downloaded or written to disk beyond a short-lived temp dir created at runtime. This is low install risk.
Credentials
Registry metadata lists no required env vars or required binaries, but SKILL.md lists optional API keys and the script actually retrieves keys from `pass shared/...`. The script implicitly requires the `pass` binary plus common utilities (curl, bc, mktemp, sort). The requested credentials (provider API keys) are appropriate for the stated purpose, but two inconsistencies are worth noting: the metadata omits the required binaries, and the env vars listed in SKILL.md do not match the script's actual secret sourcing.
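Before handing the script real credentials, it is easy to verify the undeclared binaries yourself. This is a hedged sketch: `check_deps` is a hypothetical helper, not part of the shipped scripts/ping.sh; the dependency list comes from the review above.

```shell
#!/usr/bin/env bash
# Sketch: report which of the script's implicit dependencies are missing.
# check_deps is a hypothetical helper; the list (pass, curl, bc, mktemp,
# sort) is taken from the review, not from the shipped script itself.
check_deps() {
  local missing=""
  for bin in "$@"; do
    command -v "$bin" >/dev/null 2>&1 || missing="$missing $bin"
  done
  if [ -n "$missing" ]; then
    echo "missing:$missing" >&2
    return 1
  fi
}

# Example: fail early if any dependency is absent.
check_deps pass curl bc mktemp sort || echo "install the missing tools first" >&2
```

Running this once before the first invocation surfaces the metadata gap without executing any network requests.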
Persistence & Privilege
The skill does not request permanent presence (always: false), does not modify other skills or system configuration, and does not store credentials or enable itself. It runs ephemeral network requests only when invoked.
What to consider before installing
This script reads your LLM provider API keys (it expects them in `pass` at `shared/<provider>/api-key`) and sends tiny requests to each provider to measure latency. Before using:

1. Confirm you trust the providers whose keys will be used; the script transmits your keys to those provider APIs as part of normal requests.
2. Ensure the required binaries are installed (pass, curl, bc, mktemp); the registry metadata does not declare them, but the script requires them.
3. If you prefer environment variables, adapt the script to source keys from them (the shipped script does not read env vars).
4. Note that the Google API key is passed as a query parameter and may appear in logs or proxy traces; consider header-based auth if preferred.
5. Run the script in a safe test environment first and inspect it locally; it deletes temp files and discards response bodies, but verify it behaves as expected.

The inconsistencies are likely sloppy metadata rather than malicious intent, but review and adapt the script before granting it access to real credentials.
Current version: v1.0.0
License
MIT-0
Free to use, modify, and redistribute. No attribution required.
SKILL.md
LLM Speedtest
Ping major LLM providers in parallel and compare real API latency (TTFT).
When to Use
- User types `/ping` or asks about model latency/speed
- Comparing provider response times
- Checking if a specific provider is slow or down
How It Works
Runs scripts/ping.sh which:
- Retrieves API keys from `pass shared/` (users may need to adapt key sourcing for their setup)
- Fires parallel `curl` requests to each provider with a minimal prompt ("hi", `max_tokens=1`)
- Measures total round-trip time per provider
- Sorts results by latency and displays with color badges
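The steps above follow a common shell pattern: one background job per provider, each recording its latency to a temp file, then a sort over the results. This is a hedged sketch of that pattern, not the shipped scripts/ping.sh; `measure` is a hypothetical helper, and `sleep` stands in for the real `curl` calls (which would use `curl -w '%{time_total}'`).

```shell
#!/usr/bin/env bash
# Sketch (assumed, not the shipped script) of parallel latency measurement:
# fire one background job per provider, record "<name> <ms>" to a temp file,
# wait for all jobs, then sort fastest-first.
set -eu
tmpdir=$(mktemp -d)
trap 'rm -rf "$tmpdir"' EXIT

measure() {          # <name> <cmd...>: time a command, record "<name> <ms>"
  local name=$1; shift
  local start end
  start=$(date +%s%N)
  "$@" >/dev/null 2>&1 || true   # real script: curl ... -w '%{time_total}'
  end=$(date +%s%N)
  printf '%s %d\n' "$name" $(( (end - start) / 1000000 )) > "$tmpdir/$name"
}

# Stand-ins for provider requests; the shipped script would call curl here.
measure slow sleep 0.3 &
measure fast sleep 0.1 &
wait                             # all parallel jobs complete before sorting

sort -k2 -n "$tmpdir"/*          # fastest to slowest
```

The `trap ... EXIT` matches the review's observation that the script cleans up its short-lived temp dir.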
Output Format
Results are sorted fastest-to-slowest with color badges:
- 🟢 < 2s — Fast
- 🟡 2–5s — Normal
- 🔴 5–30s — Slow
- ⚫ > 30s — Timeout
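The thresholds above can be expressed as a small helper. This is a sketch of the mapping as documented; `badge` is a hypothetical function name, not one taken from the shipped scripts/ping.sh.

```shell
#!/usr/bin/env bash
# Sketch: map a latency in milliseconds to the badge from the table above.
# badge() is a hypothetical helper, not a function from the shipped script.
badge() {
  local ms=$1
  if   [ "$ms" -lt 2000 ];  then echo "🟢"   # Fast
  elif [ "$ms" -lt 5000 ];  then echo "🟡"   # Normal
  elif [ "$ms" -lt 30000 ]; then echo "🔴"   # Slow
  else                           echo "⚫"   # Timeout
  fi
}

badge 412    # → 🟢
badge 2104   # → 🟡
```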
Example:
⚡ Model Latency — 14:32
🟢 `Gemini 412ms`
🟢 `GPT-4o 623ms`
🟢 `Sonnet 891ms`
🟡 `Grok 2104ms`
🟡 `MiniMax 3210ms`
🟡 `Opus 4102ms`
_real API latency (TTFT)_
Models Tested
| Provider | Model |
|---|---|
| Anthropic | Claude Sonnet 4 |
| Anthropic | Claude Opus 4 |
| OpenAI | GPT-4o-mini |
| Google | Gemini 2.5 Flash |
| MiniMax | MiniMax-M1 |
| xAI | Grok 3 Mini Fast |
Cost
~$0.0001 per run (1 token per model, cheapest tiers).
Note
This skill uses `pass shared/` for API key retrieval. If you don't use `pass`, you'll need to adapt scripts/ping.sh to source keys from environment variables or another secret manager.
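One way to adapt key sourcing is a lookup that prefers an environment variable and falls back to `pass`. This is a hedged sketch: `get_key` and the `OPENAI_API_KEY`-style variable names are assumptions for illustration, not code from the shipped script.

```shell
#!/usr/bin/env bash
# Sketch: prefer <PROVIDER>_API_KEY from the environment, fall back to the
# pass layout SKILL.md documents. get_key and the variable-naming convention
# are assumptions, not part of the shipped scripts/ping.sh.
get_key() {
  local provider=$1
  local var
  var=$(printf '%s_API_KEY' "$provider" | tr '[:lower:]' '[:upper:]')
  if [ -n "${!var:-}" ]; then
    printf '%s\n' "${!var}"              # environment variable wins
  elif command -v pass >/dev/null 2>&1; then
    pass "shared/${provider}/api-key"    # documented pass layout
  else
    echo "no key found for ${provider}" >&2
    return 1
  fi
}
```

Usage: `get_key openai` would return `$OPENAI_API_KEY` if set, otherwise query `pass shared/openai/api-key`.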
Files
2 total
