Install

```
openclaw skills install ping-model
```

Measure and display AI model response latency with consistent formatting. Use when the user types `/ping`, or `/ping` followed by a model name, to test round-trip time.

Usage:

```
bash command:"node {baseDir}/ping-model.js"
bash command:"node {baseDir}/ping-model.js --model minimax"
bash command:"node {baseDir}/ping-model.js --compare kimi,minimax,deepseek"
```
| Command | Description |
|---|---|
| `/ping` | Ping the current active model |
| `/ping kimi` | Switch to kimi, ping, return |
| `/ping minimax` | Switch to minimax, ping, return |
| `/ping deepseek` | Switch to deepseek, ping, return |
| `/ping all` | Compare all available models |
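The command variants above all map onto the same CLI. A minimal sketch of that dispatch, assuming a hardcoded `MODELS` list (the real skill presumably discovers available models from its own config):

```javascript
// Hypothetical mapping from /ping variants to the CLI invocations shown above.
// MODELS is an assumed list, not read from any real configuration.
const MODELS = ["kimi", "minimax", "deepseek"];

function pingCommand(arg) {
  if (!arg) return "node ping-model.js";                               // /ping
  if (arg === "all")                                                   // /ping all
    return `node ping-model.js --compare ${MODELS.join(",")}`;
  return `node ping-model.js --model ${arg}`;                          // /ping <model>
}
```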
Required format - ALWAYS use this exact structure:

```
🧪 PING {model-name}
📤 Sent: {HH:MM:SS.mmm}
📥 Received: {HH:MM:SS.mmm}
⏱️ Latency: {formatted-duration}
🎯 Pong!
```
Duration formatting:

- `XXXms` for sub-second latencies (e.g., 847ms)
- `X.XXs` for 1-60 seconds (e.g., 1.23s)
- `X.XXmin` for over a minute (e.g., 2.5min)

Fast response (< 1s):

```
🧪 PING kimi
📤 Sent: 09:34:15.123
📥 Received: 09:34:15.247
⏱️ Latency: 124ms
🎯 Pong!
```
Medium response (1-60s):

```
🧪 PING minimax
📤 Sent: 09:34:15.123
📥 Received: 09:34:16.456
⏱️ Latency: 1.33s
🎯 Pong!
```
Slow response (> 60s):

```
🧪 PING gemini
📤 Sent: 09:34:15.123
📥 Received: 09:35:25.456
⏱️ Latency: 1.17min
🎯 Pong!
```
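The three duration tiers above can be sketched as a single formatter (note the doc's `2.5min` example trims a trailing zero; this sketch keeps two decimals throughout, matching the `X.XXmin` pattern):

```javascript
// Format a millisecond duration using the skill's three tiers:
// XXXms under 1 second, X.XXs under 60 seconds, X.XXmin beyond that.
function formatDuration(ms) {
  if (ms < 1000) return `${Math.round(ms)}ms`;
  if (ms < 60000) return `${(ms / 1000).toFixed(2)}s`;
  return `${(ms / 60000).toFixed(2)}min`;
}
```

The thresholds reproduce the worked examples: 124 → `124ms`, 1333 → `1.33s`, 70333 → `1.17min`.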
When testing a non-active model:

1. Switch to the requested model
2. Ping it and capture the timing
3. Switch back to the original model

Critical: Always return to the original model after testing.
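One way to make the restore guarantee hold even when the ping fails is a `try`/`finally` wrapper. This is only a sketch: `getActiveModel`, `setActiveModel`, and `pingModel` are hypothetical stand-ins, not part of this skill's real API.

```javascript
// Sketch of the switch -> ping -> restore flow. The `api` methods are
// hypothetical placeholders for whatever actually manages the active model.
async function pingOther(target, api) {
  const original = await api.getActiveModel();
  try {
    await api.setActiveModel(target);
    return await api.pingModel(target);
  } finally {
    // Critical: restore the original model even if the ping throws.
    await api.setActiveModel(original);
  }
}
```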
```
bash command:"node {baseDir}/ping-model.js --compare kimi,minimax,deepseek,gpt"
```

Output format:

```
══════════════════════════════════════════════════
🧪 MODEL COMPARISON
══════════════════════════════════════════════════
🥇 kimi      124ms
🥈 minimax   1.33s
🥉 deepseek  2.45s
4️⃣ gpt       5.67s

🏆 Fastest: kimi (124ms)
```
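The ranking step can be sketched as: sort the measured latencies ascending, then prefix medal emojis, falling back to numbered keycaps from fourth place on. (Raw milliseconds here; the real output would also run each value through the duration formatter.)

```javascript
// Sort compare-mode results by latency and attach rank emojis.
function rank(results) {
  const medals = ["🥇", "🥈", "🥉"];
  return [...results]
    .sort((a, b) => a.ms - b.ms)
    .map((r, i) => `${medals[i] ?? `${i + 1}\uFE0F\u20E3`} ${r.model} ${r.ms}ms`);
}
```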
The ping latency is measured as the time between the prompt being sent to the model (📤 Sent) and its response arriving (📥 Received). This captures the model's internal processing time, not network latency.
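A minimal sketch of that measurement: wall-clock stamps for the Sent/Received lines, a monotonic clock for the latency itself. `callModel` is a placeholder for whatever actually issues the request; with a local gateway the measured interval is dominated by the model's processing time.

```javascript
// Time a single model round trip and render the skill's required format.
// `callModel` is a hypothetical async function, not a real API.
async function ping(callModel, name) {
  const stamp = (d) =>
    d.toTimeString().slice(0, 8) + "." + String(d.getMilliseconds()).padStart(3, "0");
  const sentAt = new Date();
  const t0 = performance.now();   // monotonic, unaffected by clock changes
  await callModel();
  const t1 = performance.now();
  return [
    `🧪 PING ${name}`,
    `📤 Sent: ${stamp(sentAt)}`,
    `📥 Received: ${stamp(new Date())}`,
    `⏱️ Latency: ${Math.round(t1 - t0)}ms`,
    `🎯 Pong!`,
  ].join("\n");
}
```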