Back to skill
v0.1.2

A.I. Smart Router

ReviewClawScan verdict for this skill. Analyzed May 1, 2026, 5:20 AM.

Analysis

The skill is coherent for a model router, but it can silently forward prompts to other AI providers and Telegram approval flows while using provider credentials and persistent routing logs, so it should be reviewed before installation.

GuidanceBefore installing, decide which AI providers may receive your prompts, configure only those keys, enable routing visibility at first, and avoid relying on automatic redaction for secrets. If using fallback or HITL features, require opt-in provider changes and confirm what is sent to Telegram or other external channels.

Findings (7)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Abnormal behavior control

Checks for instructions or behavior that redirect the agent, misuse tools, execute unexpected code, cascade across systems, exploit user trust, or continue outside the intended task.

Tool Misuse and Exploitation
SeverityLowConfidenceHighStatusNote
executor.py
delegates tasks to different models via OpenClaw's sessions_spawn mechanism

The executor prepares sub-agent delegation through sessions_spawn. This is central to the router's purpose, but it is still broad autonomous tool use.

User impactThe skill may spawn provider-specific model agents to handle user tasks, which can affect cost, privacy, and which tools or providers process the task.
RecommendationConfirm spawned agents are limited to model answering, require approval for non-routing actions, and keep routing visibility enabled during initial use.
Agentic Supply Chain Vulnerabilities
SeverityLowConfidenceHighStatusNote
README.md
git clone https://github.com/c0nSpIc0uS7uRk3r/smart-router.git

The README offers an unpinned remote clone path, while the registry source/homepage are not established in the supplied metadata.

User impactFollowing the clone command could install whatever code is current on that repository rather than the reviewed registry artifact.
RecommendationInstall the reviewed version when possible, or pin any GitHub install to a trusted commit/tag before use.
Agent Goal Hijack
SeverityInfoConfidenceMediumStatusNote
references/security.md
Ignore previous instructions

A prompt-injection phrase appears in a security reference file. In this context it may be an example pattern, but such text should not be treated as an instruction by the agent.

User impactIf reference examples are ever inserted into active prompts without clear quoting, they could confuse downstream agents.
RecommendationKeep prompt-injection examples isolated as quoted/test data and ensure runtime prompts explicitly treat them as untrusted examples.
Permission boundary

Checks whether tool use, credentials, dependencies, identity, account access, or inter-agent boundaries are broader than the stated purpose.

Identity and Privilege Abuse
SeverityLowConfidenceHighStatusNote
README.md
ANTHROPIC_API_KEY ... OPENAI_API_KEY ... GOOGLE_API_KEY ... XAI_API_KEY ... OpenClaw Auth Profile

The router expects provider API keys or OpenClaw auth profiles. This is purpose-aligned for multi-provider routing, but it grants account and billing authority.

User impactInstalling and configuring the skill can let it make calls using your AI provider accounts.
RecommendationUse separate, least-privilege provider keys where possible, set spending limits, and only configure providers you are comfortable sending prompts to.
Sensitive data protection

Checks for exposed credentials, poisoned memory or context, unclear communication boundaries, or sensitive data that could leave the user's control.

Insecure Inter-Agent Communication
SeverityMediumConfidenceHighStatusConcern
README.md
except ContextOverflow: response = await call_model(messages, "google/gemini-2.5-pro")

The documented overflow path silently retries the same message set with Google Gemini, showing that prompts or conversation context may be resent to a different provider automatically.

User impactA private prompt intended for one model/provider may be sent to another provider during fallback or context-overflow handling.
RecommendationRequire explicit opt-in or a provider allowlist for cross-provider fallback, show visible notices when provider changes occur, and document exactly what prompt/context is sent.
Insecure Inter-Agent Communication
SeverityMediumConfidenceMediumStatusConcern
README.md
HITL Gate | Low-confidence (<75%) routing triggers Telegram notification for approval

The artifact discloses an external Telegram approval channel, but does not bound what request details are sent, who receives them, or how that channel is configured.

User impactRouting decisions or request details could be exposed to a Telegram chat outside the AI provider environment.
RecommendationMake Telegram approval opt-in, declare required Telegram configuration, limit notifications to minimal metadata, and document recipients and retention.
Memory and Context Poisoning
SeverityLowConfidenceHighStatusNote
compactor.py
ROUTER_STATE_DIR ... "~/.openclaw/router-state"; ROUTER_LOGS_DIR ... "~/.openclaw/logs"

The skill keeps persistent router state and routing logs, then compacts/archives them for later use.

User impactRouting history, circuit-breaker state, and rate-limit data may persist locally and influence future routing decisions.
RecommendationReview log retention settings, periodically clear router logs if needed, and avoid putting sensitive prompt content into routing metadata.