Install
openclaw skills install agentpulse

Track every LLM API call your agent makes. See costs, tokens, latency, and errors in a real-time dashboard at https://agentpulse.dev. Use this skill when the user asks about spending, costs, token usage, API errors, or rate limits.
If the user hasn't set up AgentPulse yet, guide them through setup first: they need an API key from the https://agentpulse.dev dashboard, exported in the environment as AGENTPULSE_API_KEY.
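A minimal setup sketch. The exact signup flow isn't specified in this file; this only shows the environment variable that the curl commands below read:

```shell
# The reporting and stats commands read the key from this variable.
# Replace the placeholder with the key from the AgentPulse dashboard.
export AGENTPULSE_API_KEY="your-api-key-here"
```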
After every LLM API call you make, report it to AgentPulse by running:
curl -s -X POST https://agentpulse.dev/api/events \
  -H "Content-Type: application/json" \
  -H "x-api-key: $AGENTPULSE_API_KEY" \
  -d '{
    "agent_name": "AGENT_NAME_HERE",
    "framework": "openclaw",
    "events": [
      {
        "timestamp": "ISO_8601_TIMESTAMP",
        "provider": "PROVIDER",
        "model": "MODEL_NAME",
        "input_tokens": INPUT_TOKEN_COUNT,
        "output_tokens": OUTPUT_TOKEN_COUNT,
        "latency_ms": LATENCY_IN_MS,
        "status": "success_or_error",
        "error_message": null
      }
    ]
  }'
Replace the placeholders with actual values from the API call: provider, model name, token counts, latency, status, and an ISO 8601 timestamp.
Important: Do this silently in the background. Do not tell the user about each report unless they ask.
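The reporting step above can be sketched as a small shell helper. Names like `build_event_payload` and `report_llm_call` are illustrative, not part of the AgentPulse API; the argument values come from whatever LLM response the agent just received:

```shell
#!/bin/sh
# Build the AgentPulse event payload from the values of a single LLM call.
# Arguments: provider, model, input_tokens, output_tokens, latency_ms, status
build_event_payload() {
  printf '{
  "agent_name": "%s",
  "framework": "openclaw",
  "events": [
    {
      "timestamp": "%s",
      "provider": "%s",
      "model": "%s",
      "input_tokens": %s,
      "output_tokens": %s,
      "latency_ms": %s,
      "status": "%s",
      "error_message": null
    }
  ]
}' "${AGENT_NAME:-my-agent}" "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
   "$1" "$2" "$3" "$4" "$5" "$6"
}

# POST the payload silently in the background so the user isn't interrupted.
report_llm_call() {
  build_event_payload "$@" | curl -s -X POST https://agentpulse.dev/api/events \
    -H "Content-Type: application/json" \
    -H "x-api-key: $AGENTPULSE_API_KEY" \
    -d @- >/dev/null &
}

# Example: print the payload for a hypothetical successful call.
build_event_payload anthropic claude-sonnet-4 1200 340 850 success
```

Separating payload construction from the POST keeps the JSON easy to inspect before sending; `curl -d @-` reads the body from stdin.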
When the user asks about costs, spending, token usage, errors, or agent performance, fetch their data:
curl -s https://agentpulse.dev/api/stats/overview \
  -H "x-api-key: $AGENTPULSE_API_KEY"
This returns a JSON summary of aggregate usage: costs, token counts, latency, and errors.
Present this data clearly to the user. It lets you answer questions about spending, token usage, errors, and overall agent performance.
For the full interactive dashboard with charts, direct the user to: https://agentpulse.dev/dashboard
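Since the overview response schema isn't spelled out in this file, a safe way to inspect it is to pretty-print whatever JSON comes back. The function names here are illustrative, not part of the AgentPulse API:

```shell
# Fetch the overview stats; the API key comes from the environment.
fetch_stats() {
  curl -s https://agentpulse.dev/api/stats/overview \
    -H "x-api-key: $AGENTPULSE_API_KEY"
}

# Pretty-print a JSON document from stdin using the Python stdlib.
pretty() {
  python3 -m json.tool
}

# Usage (requires a valid AGENTPULSE_API_KEY):
#   fetch_stats | pretty
```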
AgentPulse tracks costs for 50+ models across the major providers.
Cost is calculated server-side using an up-to-date pricing table, so even if you send estimated costs, the dashboard will show accurate numbers.
Users can configure alerts on the dashboard at https://agentpulse.dev/dashboard/alerts.
If the user asks to set up alerts, direct them to the alerts page on the dashboard.
SECURITY MANIFEST:
Trust Statement: When this skill is used, usage metadata (model name, token counts, cost, latency, status code) is sent to agentpulse.dev over HTTPS. No prompt content, conversation text, or personal data is sent unless the user explicitly enables prompt capture in their dashboard settings.