Token Usage Monitor

v1.0.0

Monitor and display token usage metrics for AI models. Use when you need to track token consumption rates, view historical usage data, or get alerts about hi...

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description align with included code and docs: the repo provides a Python tracker, CLI examples, and integration guidance for instrumenting model calls. No unrelated credentials, binaries, or services are requested.
Instruction Scope
Runtime instructions and SKILL.md focus on tracking, reporting, and alerting. The script persists data to a file (documented as ~/.openclaw/token_usage.json) and integration guidance instructs modifying session startup/heartbeat to call the tracker—this is expected for the stated purpose. Implementation detail: the script calls os.makedirs(os.path.dirname(self.data_file)) but does not expand the tilde (~) in the default path, so it may create a literal directory named '~' relative to the working directory instead of ~/.openclaw; this is a bug to be aware of but not a security red flag.
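The path-expansion issue noted above is a one-line fix; a minimal sketch (the function name ensure_data_dir is illustrative, not from the shipped script):

```python
import os

def ensure_data_dir(data_file: str = "~/.openclaw/token_usage.json") -> str:
    """Expand '~' before creating the parent directory.

    Without expanduser(), makedirs() would create a literal directory
    named '~' relative to the current working directory.
    """
    path = os.path.expanduser(data_file)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    return path
```

Passing exist_ok=True also avoids the FileExistsError the bare os.makedirs() call would raise on a second run.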
Install Mechanism
Instruction-only skill with an included Python script. No install spec or remote downloads; nothing will be fetched or executed from external URLs during install.
Credentials
The skill requests no environment variables, credentials, or config paths. All data is local to a JSON file. The integration examples mention optional email/webhook alerts but those are commented/outlines — adding those would require adding credentials later, so review any extensions before enabling.
Persistence & Privilege
The skill does not request 'always' inclusion and does not modify other skills or global agent settings. It writes a single local data file for its own usage and is invocable by the user or by explicit integration code.
Assessment
This skill appears coherent and local-only: it records token counts to a JSON file and offers CLI and integration points to instrument your model calls. Before installing:

  • Inspect the Python file if you will run it.
  • Note that it writes usage data to a local file (documented as ~/.openclaw/token_usage.json); the current implementation does not expand '~', so you may want to fix path handling (use os.path.expanduser) to ensure the file ends up where you expect.
  • Integrating automatic tracking requires adding calls into your session wrapper/heartbeat as described; review those changes to avoid unintended behavior.
  • Be cautious if you extend _check_thresholds to send emails or webhooks, since that will require network endpoints and credentials; store any such credentials securely.

Overall this is internally consistent with its stated purpose.

Like a lobster shell, security has layers — review code before you run it.

Tags: ai-costs · cost-optimization · doubao · latest · openclaw · token-monitoring


SKILL.md

Token Usage Monitor

Overview

This skill provides comprehensive token usage monitoring and reporting capabilities for AI models. It helps you track token consumption in real-time, analyze historical usage patterns, and receive alerts when usage exceeds predefined thresholds. Ideal for optimizing prompt costs, controlling AI service expenses, and ensuring efficient use of model resources.

Core Capabilities

1. Real-Time Token Usage Monitoring

  • Track token consumption per request, per session, and per model
  • Monitor token usage speed (tokens per second/minute)
  • View live usage metrics including prompt tokens, completion tokens, and total tokens
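Tokens-per-second monitoring reduces to dividing accumulated token counts by elapsed wall time; a minimal sketch (RateMeter is a hypothetical helper, not part of the shipped script):

```python
import time

class RateMeter:
    """Hypothetical throughput helper; not part of the shipped script."""

    def __init__(self) -> None:
        # monotonic() is immune to system clock changes, so it is the
        # right clock for measuring elapsed time.
        self.start = time.monotonic()
        self.total_tokens = 0

    def record(self, prompt_tokens: int, completion_tokens: int) -> None:
        # Accumulate both sides of each request.
        self.total_tokens += prompt_tokens + completion_tokens

    def tokens_per_second(self) -> float:
        elapsed = time.monotonic() - self.start
        return self.total_tokens / elapsed if elapsed > 0 else 0.0
```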

2. Historical Usage Analysis

  • Generate usage reports for specified time periods (daily, weekly, monthly)
  • Analyze usage trends across different models and applications
  • Identify peak usage times and cost drivers

3. Threshold Alerts

  • Set custom token usage thresholds for different models or sessions
  • Receive notifications when usage exceeds defined limits
  • Configure alert channels (chat, email, or system notifications)

4. Cost Estimation

  • Calculate approximate costs based on token usage and model pricing
  • Compare costs across different models and providers
  • Optimize prompts to reduce token usage and costs
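Cost estimation is a multiply-and-sum over per-token prices; a minimal sketch with placeholder numbers (the PRICING figures are illustrative, not real provider rates):

```python
# Illustrative USD prices per million tokens; real pricing varies by
# provider and changes over time.
PRICING = {
    "gpt-4": {"prompt": 30.00, "completion": 60.00},
    "doubao-seed": {"prompt": 0.40, "completion": 1.20},
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Return the estimated USD cost of one call."""
    p = PRICING[model]
    return (prompt_tokens * p["prompt"]
            + completion_tokens * p["completion"]) / 1_000_000
```

For example, a 1,000-prompt-token / 1,000-completion-token call at the illustrative gpt-4 rates works out to $0.09.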

Quick Start

Monitor Current Session Usage

# Check current session token usage
python scripts/token_usage_tracker.py --session

Generate Daily Usage Report

# Generate report for today's usage
python scripts/token_usage_tracker.py --report --period day

Set Usage Threshold

# Set threshold of 100,000 tokens per day for GPT-4
python scripts/token_usage_tracker.py --set-threshold --model gpt-4 --limit 100000 --period day

Resources

scripts/


  • token_usage_tracker.py: Main script for tracking and reporting token usage

    Key features:

    • Tracks token usage per session, model, and time period
    • Generates daily usage reports with cost estimates
    • Supports custom usage thresholds and alerts
    • Provides real-time and historical usage analytics

    Usage examples:

    # Track a single usage event
    python scripts/token_usage_tracker.py --track --model doubao-seed --prompt-tokens 100 --completion-tokens 200
    
    # View current session usage
    python scripts/token_usage_tracker.py --session
    
    # Generate daily usage report
    python scripts/token_usage_tracker.py --report --period day
    
    # Set usage threshold (100,000 tokens/day for Doubao)
    python scripts/token_usage_tracker.py --set-threshold --model doubao-seed --limit 100000 --period day
    
    # View overall usage summary
    python scripts/token_usage_tracker.py --summary
    

Note: The script automatically creates and manages a data file at ~/.openclaw/token_usage.json to store usage data.
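If you script against that data file directly, guard against it not existing yet; a minimal sketch (the DEFAULT_DATA schema is a guess, so inspect token_usage_tracker.py for the actual layout):

```python
import json
import os

# Guessed schema; check the shipped script for the real layout.
DEFAULT_DATA = {
    "sessions": {},    # session id -> per-model token counts
    "daily": {},       # "YYYY-MM-DD" -> aggregated totals
    "thresholds": {},  # model -> threshold settings
}

def load_usage(path: str = "~/.openclaw/token_usage.json") -> dict:
    """Read the usage file, falling back to an empty structure."""
    path = os.path.expanduser(path)
    if not os.path.exists(path):
        # Return copies so callers can mutate safely.
        return {key: dict(value) for key, value in DEFAULT_DATA.items()}
    with open(path) as f:
        return json.load(f)
```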


Files

4 total
