llmrouter
Pass. Audited by VirusTotal on May 12, 2026.
Overview
Type: OpenClaw Skill
Name: llmrouter
Version: 0.1.1

The skill bundle describes an LLM routing proxy with clear setup and configuration instructions. While it fetches external code via `git clone` and installs dependencies via `pip`, these are standard software-installation practices and show no malicious intent within the provided files. The documented macOS LaunchAgent for persistence is a legitimate operational requirement for a server application, not a hidden mechanism for unauthorized access. There is no evidence of data exfiltration, malicious execution, or prompt injection directing the agent toward harmful objectives.
Findings (0)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Installing the skill requires trusting the external repository and its Python dependencies.
The setup relies on cloning an external repository and installing its Python dependencies. This is expected for the skill, but the runnable code and dependency versions are not included in the provided artifact set.
git clone https://github.com/alexrudloff/llmrouter.git
...
pip install -r requirements.txt
Review the GitHub repository and requirements before installing, use a virtual environment as documented, and prefer pinned dependencies when possible.
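The documented virtual-environment approach can be combined with pinning in a few lines; a minimal sketch (the repository URL is from the skill docs, the `requirements.lock` filename is an assumption):

```shell
# Sketch: install into an isolated virtualenv and pin the versions that were actually resolved.
git clone https://github.com/alexrudloff/llmrouter.git
cd llmrouter
python3 -m venv .venv            # keep dependencies out of the system Python
. .venv/bin/activate
pip install -r requirements.txt
pip freeze > requirements.lock   # hypothetical lockfile; reinstall later with: pip install -r requirements.lock
```

Pinning the frozen versions means a later reinstall pulls the same dependency set you originally reviewed.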
The router can use configured provider credentials to send LLM requests and consume account quota.
The router needs LLM provider credentials to proxy requests. This is aligned with the purpose, but those credentials can authorize API usage and incur costs.
Anthropic API key or Claude Code OAuth token (or other provider key)
Use provider keys with the narrowest practical scope, monitor billing/usage, and avoid placing long-lived secrets in shared files.
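One way to keep a long-lived key out of shell history and shared dotfiles is an owner-only credential file loaded at launch time; a sketch (the file path and the `ANTHROPIC_API_KEY` variable name are assumptions, adjust to however the router actually reads credentials):

```shell
# Sketch: store the provider key in a file only the owner can read (mode 0600).
mkdir -p ~/.config/llmrouter
install -m 600 /dev/null ~/.config/llmrouter/anthropic_key  # create an empty 0600 file
# Paste the key into that file once (with your editor), then at launch time:
export ANTHROPIC_API_KEY="$(cat ~/.config/llmrouter/anthropic_key)"
```

This avoids embedding the secret in the plist or a world-readable config, and rotating the key only requires replacing the file's contents.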
Depending on configuration, user prompts may be sent to remote LLM providers rather than staying local.
The skill intentionally routes prompts through a local proxy to selected local or remote model providers. This is disclosed and purpose-aligned, but it affects where prompt content is processed.
routes them to appropriate LLM models ... Anthropic, OpenAI, Google Gemini, Kimi/Moonshot, and Ollama
Configure only trusted providers, use Ollama/local routing for sensitive prompts when appropriate, and understand each provider's data handling policy.
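Before relying on local routing for sensitive prompts, it is worth confirming the Ollama daemon is actually reachable; a sketch probing Ollama's default API port (11434 is the default and an assumption if you have reconfigured it):

```shell
# Sketch: probe the local Ollama API before routing sensitive prompts through it.
if curl -sf http://localhost:11434/api/tags >/dev/null; then
  echo "ollama reachable"
else
  echo "ollama not running; prompts may fall back to remote providers"
fi
```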
If enabled, the router may continue running in the background and remain available to handle requests using configured credentials.
The macOS service example configures the router to start automatically and stay running. This is clearly documented as an optional service mode.
<key>RunAtLoad</key>
<true/>
<key>KeepAlive</key>
<true/>
Only install the LaunchAgent if you want persistent operation, keep the service bound to localhost unless you intentionally need network access, and know how to unload or disable it.
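For reference, the documented keys fit into a complete LaunchAgent along these lines; the label, program path, and `--host` flag below are illustrative assumptions, not part of the audited artifacts. When you no longer want persistence, unload it with `launchctl bootout gui/$(id -u) ~/Library/LaunchAgents/<plist>`.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>com.user.llmrouter</string>            <!-- hypothetical label -->
  <key>ProgramArguments</key>
  <array>
    <string>/usr/local/bin/llmrouter</string>    <!-- hypothetical binary path -->
    <string>--host</string>
    <string>127.0.0.1</string>                   <!-- hypothetical flag: bind to localhost only -->
  </array>
  <key>RunAtLoad</key>
  <true/>
  <key>KeepAlive</key>
  <true/>
</dict>
</plist>
```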
