Model Audit

v1.0.0

Monthly LLM stack audit — compare your current models against latest benchmarks and pricing from OpenRouter. Identifies potential savings, upgrades, and bett...

Security Scan
VirusTotal
Benign
OpenClaw
Benign (high confidence)
Purpose & Capability
The name/description (LLM stack audit) matches the code's behavior: it fetches model and pricing data from OpenRouter and reads openclaw.json to discover configured models. The declared primary env var OPENROUTER_API_KEY is appropriate, and no unrelated environment variables or binaries are requested.
Instruction Scope
SKILL.md instructs the agent to run the included Python script, which (as expected) calls the OpenRouter API and reads openclaw.json from standard locations (~/.openclaw and a couple of common container/root paths). Reading that config is within scope, but the script does access files on disk (including /root/.openclaw and /home/node/.openclaw); review those files for sensitive data before running.
Install Mechanism
No install spec; this is instruction-only plus an included Python script. Nothing is downloaded at runtime by the skill itself and no package installation is required, so install risk is low.
Credentials
Only OPENROUTER_API_KEY is required, and it is used as the Authorization header to call openrouter.ai. This is proportional to the stated purpose. Note: the script reads openclaw.json, which may contain other data; the script only extracts model IDs, but verify your config doesn't include secrets you don't want read.
Persistence & Privilege
The skill is not always-enabled, does not request persistent system-wide privileges, and does not modify other skills' configuration. It can be invoked autonomously (platform default) but that is normal and not combined with other red flags.
Assessment
This skill appears to do what it says: it calls openrouter.ai using OPENROUTER_API_KEY and inspects your openclaw.json for model IDs. Before installing or running: (1) ensure the OPENROUTER_API_KEY you provide is for an account you trust and rotate/restrict it if possible; (2) review your openclaw.json (and the paths the script checks) for any secrets you don't want read — the script only extracts model IDs but will open the file; (3) you can inspect scripts/model_audit.py (included) yourself — network activity is limited to https://openrouter.ai/api/v1/models and no other remote endpoints are contacted; (4) run the script in a controlled environment if you are concerned about accidental exposure of config files. Overall the package is coherent and low-risk, but exercise standard caution with API keys and private config files.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🔬 Clawdis
Env: OPENROUTER_API_KEY
Primary env: OPENROUTER_API_KEY
Latest: vk97f4wp42nxq871zptad4mpd5n82ak3a
258 downloads
0 stars
1 version
Updated 1mo ago
v1.0.0
MIT-0

Model Audit 📊

Audit your LLM stack against current pricing and alternatives.

Fetches live pricing from OpenRouter, analyzes your configured models, and recommends potential savings or upgrades by category.

Quick Start

# Full audit with recommendations
python3 {baseDir}/scripts/model_audit.py

# JSON output
python3 {baseDir}/scripts/model_audit.py --json

# Audit specific models
python3 {baseDir}/scripts/model_audit.py --models "anthropic/claude-opus-4-6,openai/gpt-4o"

# Show top models by category
python3 {baseDir}/scripts/model_audit.py --top

# Compare two models
python3 {baseDir}/scripts/model_audit.py --compare "anthropic/claude-sonnet-4" "openai/gpt-4o"

What It Does

  1. Fetches live pricing from OpenRouter API
  2. Reads your configured models from openclaw.json
  3. Categorizes models (reasoning, code, fast, cheap, vision)
  4. Compares against top alternatives in each category
  5. Calculates potential monthly savings
  6. Recommends upgrades or cost optimizations
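Steps 2 and 3 above hinge on pulling model IDs out of openclaw.json. A minimal sketch of how that discovery might work, assuming the candidate paths named in the security scan; the helper names and the assumption that model IDs live under "model" keys are illustrative, not the actual layout of scripts/model_audit.py:

```python
import json
from pathlib import Path

# Candidate config locations mentioned in the scan report (illustrative list)
CANDIDATE_PATHS = [
    Path.home() / ".openclaw" / "openclaw.json",
    Path("/root/.openclaw/openclaw.json"),
    Path("/home/node/.openclaw/openclaw.json"),
]

def find_config():
    """Return the first openclaw.json that exists, or None."""
    for p in CANDIDATE_PATHS:
        if p.is_file():
            return p
    return None

def extract_model_ids(config):
    """Collect every string stored under a 'model' key, at any depth.

    Only model IDs are pulled out; other config values are ignored.
    """
    ids = set()
    def walk(node):
        if isinstance(node, dict):
            for key, value in node.items():
                if key == "model" and isinstance(value, str):
                    ids.add(value)
                else:
                    walk(value)
        elif isinstance(node, list):
            for item in node:
                walk(item)
    walk(config)
    return ids

path = find_config()
if path is not None:
    models = extract_model_ids(json.loads(path.read_text()))
    print(sorted(models))
```

Walking the whole tree rather than hard-coding one key path keeps the sketch tolerant of different config shapes, which is also why verifying your config for unrelated secrets (as the scan recommends) matters: the file is opened in full even though only model IDs are kept.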

Output Example

═══ LLM Stack Audit ═══

Your Models:
  anthropic/claude-opus-4-6   $5.00/$25.00 per 1M tokens (in/out)
  openai/gpt-4o               $2.50/$10.00 per 1M tokens
  google/gemini-2.0-flash     $0.10/$0.40 per 1M tokens

Recommendations:
  💡 For fast tasks: gemini-2.0-flash is 50x cheaper than opus
  💡 Consider: deepseek/deepseek-r1 for reasoning at $0.55/$2.19
  💡 Your stack covers: reasoning ✓, code ✓, fast ✓, vision ✓
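The monthly-savings figure behind recommendations like these reduces to simple per-1M-token arithmetic. A sketch under the prices shown above; the function names and the example usage volumes (10M input / 2M output tokens per month) are hypothetical, not taken from the script:

```python
def monthly_cost(in_tokens, out_tokens, in_price, out_price):
    """USD cost for one month of usage; prices are per 1M tokens."""
    return (in_tokens * in_price + out_tokens * out_price) / 1_000_000

def potential_savings(usage, current, alternative):
    """usage: (input, output) tokens/month; current/alternative: (in, out) price per 1M."""
    return monthly_cost(*usage, *current) - monthly_cost(*usage, *alternative)

# Example: opus ($5.00/$25.00) vs gemini-2.0-flash ($0.10/$0.40)
# at an assumed 10M input / 2M output tokens per month
print(round(potential_savings((10_000_000, 2_000_000),
                              (5.00, 25.00), (0.10, 0.40)), 2))
```

With those assumed volumes, opus costs $100.00/month against $1.80 for flash, so the delta the audit would surface is about $98.20.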

Environment

Requires OPENROUTER_API_KEY environment variable.
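One way to set the variable and fail fast before invoking the audit script; the key value below is a placeholder, and the guard is a generic shell pattern rather than anything the script itself performs:

```shell
# Placeholder value — substitute your real OpenRouter key
export OPENROUTER_API_KEY="sk-or-example"

# Fail fast if the key is missing before running the audit
if [ -z "$OPENROUTER_API_KEY" ]; then
  echo "OPENROUTER_API_KEY is not set" >&2
  exit 1
fi
echo "OPENROUTER_API_KEY present"
```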

Credits

Built by M. Abidi | agxntsix.ai | YouTube | GitHub. Part of the AgxntSix Skill Suite for OpenClaw agents.

📅 Need help setting up OpenClaw for your business? Book a free consultation
