llmrouter

Pass. Audited by ClawScan on May 1, 2026.

Overview

This skill is a coherent LLM routing proxy, but users should note that it requires provider credentials, installs and runs external code, can send prompts to the selected LLM providers, and includes optional background-service instructions.

This appears purpose-aligned rather than malicious. Before installing, review the external GitHub repository and its dependencies, configure only the LLM providers you trust, protect your API keys, keep the proxy bound to localhost unless you deliberately need broader access, and enable the macOS background service only if you want the router to keep running after setup.

Findings (4)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

What this means

Installing the skill requires trusting the external repository and its Python dependencies.

Why it was flagged

The setup relies on cloning an external repository and installing its Python dependencies. This is expected for the skill, but the runnable code and dependency versions are not included in the provided artifact set.

Skill content
git clone https://github.com/alexrudloff/llmrouter.git ... pip install -r requirements.txt
Recommendation

Review the GitHub repository and requirements before installing, use a virtual environment as documented, and prefer pinned dependencies when possible.
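The recommended flow can be sketched as shell commands. The repository URL comes from the skill content above; the `.venv` path and the `pip freeze` step are illustrative assumptions, not part of the skill's documented setup:

```shell
# Clone and inspect before installing anything (URL from the skill content above).
git clone https://github.com/alexrudloff/llmrouter.git
cd llmrouter
less requirements.txt                  # review the dependencies first

# Isolate the install in a virtual environment (.venv is an assumed path).
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt

# Record the exact versions that were resolved, so the install is reproducible.
pip freeze > requirements.lock.txt
```

Freezing resolved versions after review gives you the "pinned dependencies" the recommendation asks for, even when the upstream requirements file leaves versions open.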

What this means

The router can use configured provider credentials to send LLM requests and consume account quota.

Why it was flagged

The router needs LLM provider credentials to proxy requests. This is aligned with the purpose, but those credentials can authorize API usage and incur costs.

Skill content
Anthropic API key or Claude Code OAuth token (or other provider key)
Recommendation

Use provider keys with the narrowest practical scope, monitor billing/usage, and avoid placing long-lived secrets in shared files.
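One concrete way to keep a long-lived key out of shared files is an owner-only env file. The path and variable contents below are illustrative assumptions, not llmrouter's documented configuration:

```shell
# Create an owner-only env file for the key (paths here are hypothetical).
mkdir -p ~/.config/llmrouter
touch ~/.config/llmrouter/keys.env
chmod 600 ~/.config/llmrouter/keys.env   # owner read/write only

# Store the key there instead of in a shared or world-readable file.
echo 'ANTHROPIC_API_KEY=replace-me' >> ~/.config/llmrouter/keys.env

# Export it only in the shell session that runs the router.
set -a; . ~/.config/llmrouter/keys.env; set +a
```

This keeps the secret off the command line and out of shell history, and the `chmod 600` stops other local users from reading it.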

What this means

Depending on configuration, user prompts may be sent to remote LLM providers rather than staying local.

Why it was flagged

The skill intentionally routes prompts through a local proxy to selected local or remote model providers. This is disclosed and purpose-aligned, but it affects where prompt content is processed.

Skill content
routes them to appropriate LLM models ... Anthropic, OpenAI, Google Gemini, Kimi/Moonshot, and Ollama
Recommendation

Configure only trusted providers, use Ollama/local routing for sensitive prompts when appropriate, and understand each provider's data handling policy.
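Before relying on local routing for sensitive prompts, it is worth confirming that a local Ollama instance is actually listening. `/api/tags` is Ollama's standard model-listing endpoint, and 11434 is its default port:

```shell
# Confirm Ollama is serving locally before routing sensitive prompts to it.
# /api/tags lists the locally available models; 11434 is Ollama's default port.
if curl -sf http://localhost:11434/api/tags > /dev/null; then
    echo "Ollama is reachable on localhost"
else
    echo "Ollama is not running locally" >&2
fi
```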

Note (High Confidence)
ASI10: Rogue Agents
What this means

If enabled, the router may continue running in the background and remain available to handle requests using configured credentials.

Why it was flagged

The macOS service example configures the router to start automatically and stay running. This is clearly documented as an optional service mode.

Skill content
<key>RunAtLoad</key>
<true/>
<key>KeepAlive</key>
<true/>
Recommendation

Only install the LaunchAgent if you want persistent operation, keep the service bound to localhost unless you intentionally need network access, and know how to unload or disable it.
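Unloading uses standard launchctl commands. The label and plist filename below are placeholders; substitute whatever the skill's LaunchAgent instructions actually install:

```shell
# Stop the router and prevent launchd from restarting it
# (label and plist path are placeholders, not the skill's actual names).
launchctl bootout gui/$(id -u)/com.example.llmrouter

# Equivalent older syntax, keyed on the plist file instead of the label:
launchctl unload -w ~/Library/LaunchAgents/com.example.llmrouter.plist

# Remove the plist entirely if you no longer want the service defined.
rm ~/Library/LaunchAgents/com.example.llmrouter.plist
```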