NadirClaw

v1.0.0

Install, configure, and run the NadirClaw LLM router to cut AI API costs by 40-70%. Use when the user wants to reduce LLM spending, route prompts to cheaper mode...

by Dor Amir (@doramirdor)
MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The name/description (LLM router, cost savings) aligns with the instructions: pip install a package, run its setup, and start a local server that OpenClaw and OpenAI-compatible tools can point at. However, the package metadata has no homepage or provenance, so while the capabilities match the stated purpose, the lack of source/author information is notable.
Instruction Scope
SKILL.md and scripts/install.sh instruct installing a third-party Python package and running 'nadirclaw serve' as a background daemon (nohup). It also runs 'nadirclaw openclaw onboard', which modifies OpenClaw configuration. Those actions go beyond passive guidance: they install software, modify local config, and start a network-facing service that will receive and forward user prompts — all of which could expose or route sensitive data if the package behaves maliciously.
Install Mechanism
Install occurs via 'pip install nadirclaw' from an unspecified source (no homepage, no checksum). Installing an unverified PyPI package is a moderate-to-high risk: packages can contain arbitrary code executed at install or runtime. The included install script suppresses pip error output (2>/dev/null) and launches the service in the background — techniques that increase risk and reduce visibility.
Credentials
The skill declares no required environment variables or credentials, which is consistent with a local router. However, SKILL.md references API keys and provider endpoints (OpenAI, Anthropic, Ollama) in troubleshooting/examples without declaring them or explaining how credentials are used. This mismatch means the skill relies on existing user-configured provider credentials (in OpenClaw or the environment), to which it gains indirect access after onboarding.
Persistence & Privilege
The skill does not request always:true and is user-invocable. The install script starts NadirClaw as a background process and writes logs to /tmp/nadirclaw.log; it also modifies OpenClaw config via onboarding. Starting a service and editing the agent config are within the claimed scope, but running a daemon that can receive/route prompts increases the operational footprint and blast radius if the package is malicious or buggy.
What to consider before installing
Proceed with caution. The skill asks you to pip install an unverified package and then runs a background routing service that will receive and forward your prompts. Before installing:

  1. Verify the package source (PyPI project page, GitHub repo, maintainer); check release notes and recent activity.
  2. Inspect the package code, or run the install in a disposable environment (container, VM) first.
  3. Back up your OpenClaw config and review what 'nadirclaw openclaw onboard' changes.
  4. Avoid routing sensitive data through it until you have audited the code or confirmed a trusted upstream release.
  5. Prefer installs with checksums/signatures or from a known homepage.

If you share system API keys or point production traffic at the router, the potential impact is higher. Providing a homepage, repository link, or signed release would materially reduce the risk and could change this assessment.
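One way to act on step 1 is to query PyPI's public metadata endpoint (https://pypi.org/pypi/&lt;name&gt;/json) and flag missing provenance fields before installing. The sketch below is illustrative, not part of any tool mentioned on this page; the helper names are hypothetical, and the fields checked (`home_page`, `project_urls`, `author`) are standard keys in PyPI's JSON response.

```python
import json
import urllib.request

PYPI_JSON = "https://pypi.org/pypi/{name}/json"  # PyPI's public metadata endpoint

def provenance_warnings(info: dict) -> list[str]:
    """Return warnings for missing provenance fields.

    `info` is the "info" object from PyPI's JSON response.
    """
    warnings = []
    if not info.get("home_page") and not info.get("project_urls"):
        warnings.append("no homepage or project URLs (no way to find the source repo)")
    if not info.get("author") and not info.get("author_email"):
        warnings.append("no author listed")
    return warnings

def check_package(name: str) -> list[str]:
    # Network call: fetch the package's metadata from PyPI.
    with urllib.request.urlopen(PYPI_JSON.format(name=name)) as resp:
        data = json.load(resp)
    return provenance_warnings(data["info"])
```

An empty result is not a guarantee of safety, only an absence of the most obvious red flags; inspecting the code or installing in a throwaway environment is still advisable.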

Like a lobster shell, security has layers — review code before you run it.

latest: vk97avtsq6gwf85941mgkf3dpv581yg7b


SKILL.md

NadirClaw Skill

NadirClaw is an open-source LLM router that classifies prompts in ~10ms and routes simple ones to cheap/local models while keeping complex work on premium models.

Install

pip install nadirclaw

Setup

Run the interactive wizard:

nadirclaw setup

Or auto-configure for OpenClaw:

nadirclaw openclaw onboard

This writes NadirClaw as a provider in OpenClaw config with model nadirclaw/auto. No restart needed.

Start

nadirclaw serve --verbose

Runs on http://localhost:8856. Any OpenAI-compatible tool can use it by pointing to this URL.
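Because the endpoint is OpenAI-compatible, it can be called with nothing but the standard library. The sketch below assumes the conventional `/v1/chat/completions` path and the standard OpenAI response shape (the SKILL.md only confirms the `http://localhost:8856/v1` base URL); the `ask` helper is hypothetical, and the model name `nadirclaw/auto` comes from the docs above.

```python
import json
import urllib.request

# Assumed: the router exposes the standard OpenAI-compatible chat path.
ROUTER = "http://localhost:8856/v1/chat/completions"

def chat_payload(prompt: str, model: str = "nadirclaw/auto") -> dict:
    """Build a standard OpenAI-style chat completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def ask(prompt: str) -> str:
    # Sends the request to the local router; NadirClaw decides which
    # upstream model actually serves it.
    req = urllib.request.Request(
        ROUTER,
        data=json.dumps(chat_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Assumed OpenAI-standard response shape.
    return body["choices"][0]["message"]["content"]
```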

Point tools at NadirClaw

# OpenClaw (auto)
nadirclaw openclaw onboard

# Claude Code
ANTHROPIC_BASE_URL=http://localhost:8856/v1 claude

# Any OpenAI-compatible tool
OPENAI_BASE_URL=http://localhost:8856/v1 <tool>

Routing profiles

Pass the x-routing-profile header or use these model names:

  • nadirclaw/auto - smart routing (default)
  • nadirclaw/eco - maximize savings
  • nadirclaw/premium - always use best model
  • nadirclaw/free - Ollama/local only
  • nadirclaw/reasoning - chain-of-thought optimized

Monitor savings

nadirclaw savings      # cost savings report
nadirclaw report       # detailed routing analytics
nadirclaw dashboard    # live terminal dashboard

Key features

  • ~10ms classification overhead
  • Session persistence (no model bouncing mid-conversation)
  • Rate limit fallback (auto-retry on 429)
  • Agentic task detection (forces premium for tool use)
  • Context-window filtering (auto-swaps for long conversations)
  • Supports: OpenAI, Anthropic, Google Gemini, Ollama, any LiteLLM provider

Troubleshooting

  • If nadirclaw serve fails, check API keys: nadirclaw setup
  • For Ollama: ensure ollama serve is running first
  • Logs: nadirclaw report --last 20 to see recent routing decisions
  • Raw debug: nadirclaw serve --verbose --log-raw

Files: 2 total (SKILL.md, scripts/install.sh)
