Llmrouter

Pass. Audited by VirusTotal on May 12, 2026.

Overview

Type: OpenClaw Skill
Name: llmrouter
Version: 0.1.1

The skill bundle describes an LLM routing proxy with clear setup and configuration instructions. While it involves fetching external code via `git clone` and installing dependencies via `pip`, these are standard practices for software installation and do not show malicious intent within the provided files. The documented macOS LaunchAgent for persistence is a legitimate operational requirement for a server application and is not hidden or designed for unauthorized access. There is no evidence of data exfiltration, malicious execution, or prompt injection targeting the agent with harmful objectives.

Findings (0)

This is an artifact-based, informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

What this means

Installing the skill requires trusting the external repository and its Python dependencies.

Why it was flagged

The setup relies on cloning an external repository and installing its Python dependencies. This is expected for the skill, but the runnable code and dependency versions are not included in the provided artifact set.

Skill content
git clone https://github.com/alexrudloff/llmrouter.git
...
pip install -r requirements.txt
Recommendation

Review the GitHub repository and requirements before installing, use a virtual environment as documented, and prefer pinned dependencies when possible.
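The clone-and-install flow can be kept isolated in a virtual environment, with the resolved versions pinned afterwards. A sketch, using the repository URL from the skill's own instructions (the `requirements.lock.txt` filename is just a convention, not part of the skill):

```shell
# Repo URL and requirements.txt come from the skill's own instructions.
git clone https://github.com/alexrudloff/llmrouter.git
cd llmrouter
cat requirements.txt                  # review dependencies before installing

python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
pip freeze > requirements.lock.txt    # pin resolved versions for reproducible reinstalls
```

Reinstalling later from the pinned file (`pip install -r requirements.lock.txt`) avoids silently pulling newer dependency versions.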

What this means

The router can use configured provider credentials to send LLM requests and consume account quota.

Why it was flagged

The router needs LLM provider credentials to proxy requests. This is aligned with the purpose, but those credentials can authorize API usage and incur costs.

Skill content
Anthropic API key or Claude Code OAuth token (or other provider key)
Recommendation

Use provider keys with the narrowest practical scope, monitor billing/usage, and avoid placing long-lived secrets in shared files.
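One way to keep the key out of shared dotfiles is a mode-600 key file that is read into the router's environment at launch. The file path and env-var wiring below are assumptions, not the skill's documented configuration; `ANTHROPIC_API_KEY` is the environment variable Anthropic's SDKs conventionally read:

```shell
# Assumed layout: a private key file readable only by the current user.
mkdir -p ~/.config/llmrouter
install -m 600 /dev/null ~/.config/llmrouter/anthropic.key
printf '%s' 'sk-ant-placeholder' > ~/.config/llmrouter/anthropic.key  # placeholder value

# Export only in the shell that starts the router, not in a shared profile.
export ANTHROPIC_API_KEY="$(cat ~/.config/llmrouter/anthropic.key)"
```

Rotating the key then only requires replacing the file contents, and `ls -l` makes it easy to audit that no other account can read it.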

What this means

Depending on configuration, user prompts may be sent to remote LLM providers rather than staying local.

Why it was flagged

The skill intentionally routes prompts through a local proxy to selected local or remote model providers. This is disclosed and purpose-aligned, but it affects where prompt content is processed.

Skill content
routes them to appropriate LLM models ... Anthropic, OpenAI, Google Gemini, Kimi/Moonshot, and Ollama
Recommendation

Configure only trusted providers, use Ollama/local routing for sensitive prompts when appropriate, and understand each provider's data handling policy.
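For sensitive prompts, the local path can bypass remote providers entirely. A sketch that talks straight to a local Ollama instance on its default port (11434); the model name and prompt are placeholders, and this assumes a model has already been pulled:

```shell
# Assumes Ollama is running locally on its default port and "llama3"
# (a placeholder) is available. The prompt never leaves the machine.
curl -s http://localhost:11434/api/generate \
  -d '{"model": "llama3", "prompt": "Summarize the attached notes.", "stream": false}'
```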

Note: High Confidence
ASI10: Rogue Agents
What this means

If enabled, the router may continue running in the background and remain available to handle requests using configured credentials.

Why it was flagged

The macOS service example configures the router to start automatically and stay running. This is clearly documented as an optional service mode.

Skill content
<key>RunAtLoad</key>
<true/>
<key>KeepAlive</key>
<true/>
Recommendation

Only install the LaunchAgent if you want persistent operation, keep the service bound to localhost unless you intentionally need network access, and know how to unload or disable it.
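Loading and unloading the agent uses standard `launchctl` commands. The plist path below is a placeholder; substitute whatever filename the skill's documentation has you create:

```shell
# Placeholder path: match the plist the skill's docs install.
PLIST="$HOME/Library/LaunchAgents/com.llmrouter.plist"

launchctl load -w "$PLIST"        # start now and re-enable at login
launchctl list | grep llmrouter   # confirm the agent is loaded
launchctl unload -w "$PLIST"      # stop it and disable future autostart
```

The `-w` flag makes the enable/disable state persist across reboots, so `unload -w` is the reliable way to fully retire the service rather than just stopping it once.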