Token Counter

v1.0.0

Track and analyze OpenClaw token usage across main, cron, and sub-agent sessions with category, client, model, and tool attribution. Use when the user asks where tokens are being spent, wants daily/weekly token reports, needs per-session drilldowns, or is planning token-cost optimizations and needs evidence from transcript data.

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan

  • VirusTotal: Benign
  • OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The skill's name/description (token usage reporting) matches the code's behavior (reads local OpenClaw transcripts and produces reports). However, the SKILL.md examples call an executable named token-counter (hyphen) while the repo contains scripts/token_counter.py (underscore) and there is no install step to create that wrapper. The Python script also embeds default paths under /Users/mikek/.openclaw rather than relying only on the environment variables mentioned in the docs — this divergence is likely accidental but should be fixed or documented.
Instruction Scope
Instructions explicitly direct the skill to read local OpenClaw session index, .jsonl transcripts, and cron/job definitions and to write JSON snapshots to the workspace — this is consistent with the stated purpose. Note: reading session transcripts will expose potentially sensitive conversation content and metadata; the instructions don't attempt to read unrelated system files or network endpoints.
Install Mechanism
There is no install spec (instruction-only skill with a shipped script), so nothing will be downloaded or executed automatically. Risk from installation is low, but the lack of an install step also explains the missing wrapper/executable mentioned above.
Credentials
The skill declares no required env vars or credentials (appropriate), but the docs reference $OPENCLAW_* env vars while the Python script uses hard-coded defaults in /Users/mikek/.openclaw. This mismatch could cause accidental reading/writing of unexpected paths or make the script fail silently. Also the classification rules include specific personal markers (email/domain strings) — those are used for attribution but are static and possibly privacy-sensitive.
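A common fix for this kind of mismatch is to prefer the documented environment variables and fall back to the invoking user's home directory rather than an absolute /Users/mikek path. A minimal sketch, with the path layout taken from the Defaults and Data Sources section of SKILL.md and hypothetical function names:

```python
import os
from pathlib import Path

def data_dir() -> Path:
    """Resolve the OpenClaw data directory.

    Prefer the documented $OPENCLAW_DATA_DIR variable; fall back to the
    current user's ~/.openclaw instead of a hard-coded /Users/mikek path.
    """
    env = os.environ.get("OPENCLAW_DATA_DIR")
    return Path(env) if env else Path.home() / ".openclaw"

def sessions_index() -> Path:
    """Path to the main agent's session index under the data directory."""
    return data_dir() / "agents" / "main" / "sessions" / "sessions.json"
```

With this pattern the script still works out of the box for any user, and pointing it at a copy of the data is a one-line environment change.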
Persistence & Privilege
The skill is not always-enabled and is user-invocable. It does not request elevated privileges, nor does it modify other skills or global agent settings according to the provided files.
What to consider before installing
This skill appears to do what it says: parse local OpenClaw session data and produce token-usage reports. Before installing or running it, do the following:

  1. Inspect scripts/token_counter.py locally (you already have it) to ensure no unexpected network or credential use; the script appears local-only.
  2. Note that SKILL.md expects an executable named token-counter but the shipped file is token_counter.py; run it explicitly with python3 or add a wrapper.
  3. Be aware the tool reads session transcripts and cron payloads (sensitive data). If you don't want it to access your real OpenClaw data, run it in a sandbox or point it at a copy by setting the appropriate env vars/CLI args.
  4. Consider editing the hard-coded DEFAULT_* paths to use your environment variables or explicit CLI arguments, to avoid accidental reads of /Users/mikek/... .
  5. If you need higher assurance, run the script in a restricted environment and grep for any network/socket calls before allowing it to access production data.
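As a starting point for the "grep for network/socket calls" check, the script's imports can be inspected statically with Python's ast module. A sketch; the SUSPECT set is an illustrative, non-exhaustive assumption:

```python
import ast

# Illustrative set of stdlib/third-party modules that imply network capability.
SUSPECT = {"socket", "http", "urllib", "requests", "ftplib", "smtplib"}

def imported_modules(source: str) -> set:
    """Return the top-level names of all modules imported in `source`."""
    mods = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            mods.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            mods.add(node.module.split(".")[0])
    return mods

def network_imports(source: str) -> set:
    """Imports that suggest the script could touch the network."""
    return imported_modules(source) & SUSPECT
```

Running `network_imports(open("scripts/token_counter.py").read())` should return an empty set if the script really is local-only; note this catches imports, not dynamic loading, so it complements rather than replaces a manual read.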

Like a lobster shell, security has layers — review code before you run it.

latest · vk9784f12xxtrkakvcdm3kga7g1811hjj


SKILL.md

Token Counter

Overview

Use this skill to produce token usage reports from local OpenClaw data. It parses session transcripts (.jsonl), session metadata, and cron definitions, then reports usage by category, client, tool, model, and top token consumers.

Quick Start

Run:

$OPENCLAW_SKILLS_DIR/token-counter/scripts/token-counter --period 7d

Common Commands

  1. Basic report:
$OPENCLAW_SKILLS_DIR/token-counter/scripts/token-counter --period 7d
  2. Focus on selected breakdowns:
$OPENCLAW_SKILLS_DIR/token-counter/scripts/token-counter \
  --period 1d \
  --breakdown tools,category,client
  3. Analyze one session:
$OPENCLAW_SKILLS_DIR/token-counter/scripts/token-counter \
  --session agent:main:cron:d3d76f7a-7090-41c3-bb19-e2324093f9b1
  4. Export JSON:
$OPENCLAW_SKILLS_DIR/token-counter/scripts/token-counter \
  --period 30d \
  --format json \
  --output $OPENCLAW_WORKSPACE/token-usage/token-usage-30d.json
  5. Persist daily snapshot:
$OPENCLAW_SKILLS_DIR/token-counter/scripts/token-counter \
  --period 1d \
  --save

This writes JSON to: $OPENCLAW_WORKSPACE/token-usage/daily/YYYY-MM-DD.json

Defaults and Data Sources

  • Sessions index: $OPENCLAW_DATA_DIR/agents/main/sessions/sessions.json
  • Session transcripts: $OPENCLAW_DATA_DIR/agents/main/sessions/*.jsonl
  • Cron definitions: $OPENCLAW_DATA_DIR/cron/jobs.json

The parser reads assistant usage fields for token counts and uses tool-call records for attribution.
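As an illustration of that parsing step, here is a minimal sketch. The record shape assumed here (a `role` field and a `usage` object with `input_tokens`/`output_tokens`) is a guess at the transcript format, not confirmed from the shipped script:

```python
import json

def sum_usage(jsonl_text: str) -> dict:
    """Sum token counts from assistant-message usage fields in a transcript.

    Assumes each non-empty line is a JSON record; field names are assumptions.
    """
    totals = {"input_tokens": 0, "output_tokens": 0}
    for line in jsonl_text.splitlines():
        if not line.strip():
            continue
        record = json.loads(line)
        if record.get("role") != "assistant":
            continue  # only assistant messages carry usage counts
        usage = record.get("usage") or {}
        for key in totals:
            totals[key] += int(usage.get(key, 0))
    return totals
```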

Notes on Attribution

  • Tool token attribution is heuristic: assistant-message tokens are split across tool calls in that message.
  • Session totalTokens may come from either session index metadata or transcript usage sums (max is used).
  • Client detection is rules-based (personal, bonsai, mixed, unknown) using path/domain/email markers.
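The first two rules above (even split of a message's tokens across its tool calls; max of index total vs. transcript sum) can be sketched as follows, with hypothetical function names:

```python
def attribute_to_tools(message_tokens: int, tool_calls: list) -> dict:
    """Heuristic: split one assistant message's tokens evenly across
    the tool calls made in that message."""
    if not tool_calls:
        return {}
    share = message_tokens / len(tool_calls)
    totals = {}
    for name in tool_calls:
        totals[name] = totals.get(name, 0.0) + share
    return totals

def session_total(index_total, transcript_sum: int) -> int:
    """Session totalTokens: the larger of the session-index metadata
    value (which may be missing) and the summed transcript usage."""
    return max(index_total or 0, transcript_sum)
```

The even split is of course approximate; a tool that triggers a long assistant message is charged the same as a cheap one in the same message, which is the heuristic the note above warns about.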

Validation

Run:

python3 $OPENCLAW_SKILLS_DIR/skill-creator/scripts/quick_validate.py \
  $OPENCLAW_SKILLS_DIR/token-counter

References

See:

  • references/classification-rules.md for category/client detection logic and keyword mapping.
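The rules-based client detection mentioned under Notes on Attribution could look roughly like this. The marker strings below are deliberately generic placeholders, not the skill's actual (privacy-sensitive) markers, and the function name is hypothetical:

```python
def classify_client(text: str,
                    personal_markers=("@example.com",),
                    bonsai_markers=("bonsai",)) -> str:
    """Classify a session as personal, bonsai, mixed, or unknown by
    checking for marker substrings (paths, domains, email addresses)."""
    low = text.lower()
    personal = any(m in low for m in personal_markers)
    bonsai = any(m in low for m in bonsai_markers)
    if personal and bonsai:
        return "mixed"
    if personal:
        return "personal"
    if bonsai:
        return "bonsai"
    return "unknown"
```

Because the markers are static strings baked into the rules file, reviewing references/classification-rules.md before use is the right place to check what personal data the shipped rules actually contain.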

Files

4 total
