Model Audit Pro

Monthly LLM stack audit — compare your current models against latest benchmarks and pricing from OpenRouter. Identifies potential savings, upgrades, and bett...

MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal
Benign
OpenClaw
Benign (high confidence)
Purpose & Capability
Name/description: audit LLM stack vs OpenRouter pricing. Declared requirement: OPENROUTER_API_KEY. The included script calls OpenRouter's models API, parses pricing, classifies models, and reads local openclaw.json to discover configured models — all coherent with the stated purpose.
Instruction Scope
SKILL.md instructs running scripts/model_audit.py. The script performs expected actions: HTTP GET to https://openrouter.ai/api/v1/models, reads openclaw.json from a small set of plausible locations (~/.openclaw and common container/root paths), and prints or emits JSON results. It does not transmit data to unexpected third-party endpoints or attempt arbitrary shell execution.
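The pricing-parse step of this flow can be sketched offline. The sketch below assumes the documented OpenRouter models-endpoint shape (a `data` array whose entries carry string-valued `pricing` fields quoted in USD per single token); it illustrates the kind of work the bundled script does, not the script itself:

```python
def parse_model_pricing(payload: dict) -> dict:
    """Map model id -> (prompt, completion) price in USD per 1M tokens.

    Assumes the response shape:
    {"data": [{"id": ..., "pricing": {"prompt": "...", "completion": "..."}}]}
    with prices quoted per single token, as OpenRouter's models API returns them.
    """
    prices = {}
    for model in payload.get("data", []):
        pricing = model.get("pricing", {})
        try:
            # Scale per-token strings up to the per-1M-token prices shown in the audit
            prompt = float(pricing["prompt"]) * 1_000_000
            completion = float(pricing["completion"]) * 1_000_000
        except (KeyError, ValueError):
            continue  # skip models without usable pricing
        prices[model["id"]] = (prompt, completion)
    return prices

# Sample payload mimicking the documented response shape (not live data)
sample = {"data": [{"id": "openai/gpt-4o",
                    "pricing": {"prompt": "0.0000025", "completion": "0.00001"}}]}
print(parse_model_pricing(sample))
```

Parsing a saved copy of the API response this way lets you sanity-check the numbers before trusting the audit's recommendations.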
Install Mechanism
No install spec — instruction-only with a bundled Python script. Nothing is downloaded or installed automatically; the script runs with the environment and Python already present.
Credentials
Only required environment variable is OPENROUTER_API_KEY (declared as primary credential). No other secrets or unrelated credentials are requested. The script reads no other environment variables.
Persistence & Privilege
Skill is not always-enabled and does not modify other skills or system-wide config. It only reads local config files and makes outbound requests; it does not write persistent data or claim elevated privileges.
Assessment
This skill appears to do what it claims, but take standard precautions before running code from unknown publishers:
  1. Only provide OPENROUTER_API_KEY if you trust the service and understand the key's scope and billing implications.
  2. Inspect openclaw.json in the paths the script checks; make sure it contains no sensitive secrets you don't want the script to read.
  3. If you're cautious, run the script as a non-root user in a network-restricted environment (it makes outbound HTTP requests to openrouter.ai).
  4. Because the full script is included, review it yourself (or run with --json) to confirm output before integrating.
If you need higher assurance, verify the publisher (homepage, GitHub) and review recent updates or community feedback.

Like a lobster shell, security has layers — review code before you run it.

Current version: v1.0.0

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

Runtime requirements

🔬 Clawdis
Env: OPENROUTER_API_KEY
Primary env: OPENROUTER_API_KEY

SKILL.md

Model Audit 📊

Audit your LLM stack against current pricing and alternatives.

Fetches live pricing from OpenRouter, analyzes your configured models, and recommends potential savings or upgrades by category.

Quick Start

# Full audit with recommendations
python3 {baseDir}/scripts/model_audit.py

# JSON output
python3 {baseDir}/scripts/model_audit.py --json

# Audit specific models
python3 {baseDir}/scripts/model_audit.py --models "anthropic/claude-opus-4-6,openai/gpt-4o"

# Show top models by category
python3 {baseDir}/scripts/model_audit.py --top

# Compare two models
python3 {baseDir}/scripts/model_audit.py --compare "anthropic/claude-sonnet-4" "openai/gpt-4o"

What It Does

  1. Fetches live pricing from OpenRouter API
  2. Reads your configured models from openclaw.json
  3. Categorizes models (reasoning, code, fast, cheap, vision)
  4. Compares against top alternatives in each category
  5. Calculates potential monthly savings
  6. Recommends upgrades or cost optimizations
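The savings arithmetic in steps 5 and 6 reduces to per-1M-token prices times monthly volume. A minimal sketch with a hypothetical workload (the token volumes are illustrative, not taken from the script; the prices are the ones from the output example below):

```python
def monthly_cost(tokens_in_m: float, tokens_out_m: float,
                 price_in: float, price_out: float) -> float:
    """Monthly USD cost for a model, given token volume in millions
    and per-1M-token input/output prices (the units the audit prints)."""
    return tokens_in_m * price_in + tokens_out_m * price_out

# Hypothetical workload: 10M input / 2M output tokens per month
current = monthly_cost(10, 2, 5.00, 25.00)   # claude-opus pricing from the example
cheaper = monthly_cost(10, 2, 0.10, 0.40)    # gemini-2.0-flash pricing
print(f"current=${current:.2f} alternative=${cheaper:.2f} "
      f"savings=${current - cheaper:.2f}")
```

At this workload the same arithmetic behind the "50x cheaper" recommendation yields roughly $98 of the $100 monthly spend back when fast tasks can tolerate the cheaper model.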

Output Example

═══ LLM Stack Audit ═══

Your Models:
  anthropic/claude-opus-4-6   $5.00/$25.00 per 1M tokens (in/out)
  openai/gpt-4o               $2.50/$10.00 per 1M tokens
  google/gemini-2.0-flash     $0.10/$0.40 per 1M tokens

Recommendations:
  💡 For fast tasks: gemini-2.0-flash is 50x cheaper than opus
  💡 Consider: deepseek/deepseek-r1 for reasoning at $0.55/$2.19
  💡 Your stack covers: reasoning ✓, code ✓, fast ✓, vision ✓

Environment

Requires OPENROUTER_API_KEY environment variable.
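A minimal shell sketch for wiring up the environment; the key value is a placeholder, and the guard mirrors what you'd want before invoking the audit:

```shell
# Set the key for the current shell session (placeholder value, not a real key)
export OPENROUTER_API_KEY="your-key-here"

# Guard: refuse to proceed with an unset or empty key
if [ -z "${OPENROUTER_API_KEY:-}" ]; then
  echo "OPENROUTER_API_KEY is not set" >&2
  exit 1
fi
echo "OPENROUTER_API_KEY is set"
```

With the variable exported, `python3 {baseDir}/scripts/model_audit.py` picks it up from the environment.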

Credits

Built by M. Abidi | agxntsix.ai | YouTube | GitHub
Part of the AgxntSix Skill Suite for OpenClaw agents.

📅 Need help setting up OpenClaw for your business? Book a free consultation

Files

2 total
