Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Lynkr AI Routing Proxy

v0.6.0

Intelligent LLM routing proxy with complexity-based tier routing, agentic workflow detection, and multi-provider failover. Drop-in replacement for direct pro...

by Vishal Veera Reddy (@vishalveerareddy123)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for vishalveerareddy123/lynkr.

Prompt preview: Install & Setup
Install the skill "Lynkr AI Routing Proxy" (vishalveerareddy123/lynkr) from ClawHub.
Skill page: https://clawhub.ai/vishalveerareddy123/lynkr
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install lynkr

ClawHub CLI


npx clawhub@latest install lynkr
Security Scan

VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The skill's name and description (an LLM routing proxy) match the SKILL.md content (tier routing, complexity scoring, multi-provider). However, the registry metadata at the top of the evaluation says 'Required binaries: none', while SKILL.md itself declares node >=18 and an npm package (lynkr): an inconsistency between the manifest metadata and the runtime instructions.
Instruction Scope
SKILL.md instructs the user to install a global npm package (npm install -g lynkr) and run a local proxy, and to enable features such as server-side tool execution and OPENCLAW_MODE, which rewrites model names in responses. Those instructions imply executing third-party code and possibly server-side tool invocation (command execution), yet the skill does not declare how or where provider credentials are configured. The doc also suggests setting an 'api_key' value of 'any-value' in openclaw.json, which is ambiguous and could result in an unauthenticated or misconfigured proxy if followed literally.
Install Mechanism
There is no formal install spec in the registry entry (instruction-only), but SKILL.md tells operators to use npm to install a published 'lynkr' package. Installing a public npm package is a standard workflow but carries inherent risk (you are executing third-party code); the instruction does not point to a verified release URL or checksum. This is moderate risk but expected for an npm-distributed tool.
Credentials
The skill declares no required environment variables in the metadata, yet the SKILL.md shows multiple env vars (MODEL_PROVIDER, TIER_* entries, OPENCLAW_MODE) and implies the proxy will use provider-specific credentials to call many cloud providers. The manifest omits any requirement for provider API keys or guidance on securely supplying them; the provided openclaw.json snippet uses 'api_key':'any-value', which is ambiguous and could encourage insecure deployment. Requesting no secrets in metadata while supporting many providers is inconsistent and should be clarified.
Persistence & Privilege
The skill is not marked always:true and does not request special persistent privileges in the registry metadata. It is user-invocable and can be invoked autonomously by the agent (default), which is expected for skills.
What to consider before installing
This skill appears to be a legitimate LLM routing proxy, but several things don't add up. Before installing or running it:

  1. Inspect the actual npm package and GitHub repository (verify the maintainer, recent commits, and code) rather than relying only on SKILL.md.
  2. Confirm how provider API keys are configured and stored; do not run an unauthenticated public proxy.
  3. Be cautious about installing a global npm package and running it as a service; review the package contents and any startup scripts.
  4. If you enable OPENCLAW_MODE, understand that it will expose the actual provider/model names in responses (possible metadata leakage).
  5. Because SKILL.md mentions server-side tool execution, limit network exposure and run the proxy in an isolated environment until you have audited its behavior.

Providing the repository URL, package tarball, or the lynkr source code would materially increase confidence and allow a more precise assessment.


latest: vk973vxr5c5agpat4xcr7s84xvh83d1wk
97 downloads · 0 stars · 1 version
Updated 1mo ago
v0.6.0
MIT-0

Lynkr - Intelligent LLM Routing Proxy

Lynkr routes AI coding requests to the best available model based on task complexity, cost, and provider health. It supports 12+ providers and works as an OpenAI-compatible proxy.

Quick Start

npm install -g lynkr
lynkr --port 8081

Then point your AI coding tool at http://localhost:8081/v1.

How It Works

  1. Complexity Analysis - Scores each request 0-100 based on token count, tool usage, code patterns, and domain keywords
  2. Tier Routing - Maps score to a tier (SIMPLE/MEDIUM/COMPLEX/REASONING), each configured with a specific provider:model
  3. Agentic Detection - Detects multi-step workflows (tool loops, autonomous agents) and upgrades to higher tiers
  4. Cost Optimization - Picks the cheapest provider that can handle the tier
  5. Circuit Breaker + Failover - Automatic failover when a provider is down
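Steps 2 and 3 above can be sketched as a score-to-tier mapping with an agentic upgrade. Note the thresholds below are invented for illustration; this page does not document Lynkr's real cut-offs, only that scores run 0-100 across four tiers (the tier routes reuse the example values from the configuration section):

```javascript
// Hypothetical sketch of tier routing: map a 0-100 complexity score onto
// the four tiers. Threshold values are invented for illustration.
const TIER_ROUTES = {
  SIMPLE: "ollama:qwen2.5-coder:7b",
  MEDIUM: "openrouter:anthropic/claude-sonnet-4-20250514",
  COMPLEX: "bedrock:anthropic.claude-sonnet-4-20250514-v1:0",
  REASONING: "bedrock:anthropic.claude-sonnet-4-20250514-v1:0",
};

function tierForScore(score, agentic = false) {
  let tier;
  if (score < 25) tier = "SIMPLE";
  else if (score < 50) tier = "MEDIUM";
  else if (score < 75) tier = "COMPLEX";
  else tier = "REASONING";
  // Agentic detection (step 3): multi-step workflows get bumped up one tier.
  if (agentic) {
    const order = ["SIMPLE", "MEDIUM", "COMPLEX", "REASONING"];
    tier = order[Math.min(order.indexOf(tier) + 1, order.length - 1)];
  }
  return tier;
}

console.log(tierForScore(10));       // SIMPLE
console.log(tierForScore(10, true)); // MEDIUM (agentic upgrade)
console.log(TIER_ROUTES[tierForScore(80)]);
```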

Configuration for OpenClaw

Set tier routing in your environment:

MODEL_PROVIDER=ollama
TIER_SIMPLE=ollama:qwen2.5-coder:7b
TIER_MEDIUM=openrouter:anthropic/claude-sonnet-4-20250514
TIER_COMPLEX=bedrock:anthropic.claude-sonnet-4-20250514-v1:0
TIER_REASONING=bedrock:anthropic.claude-sonnet-4-20250514-v1:0
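One subtlety in these values: each TIER_* entry is a `provider:model` pair, but the model id itself can contain colons (e.g. the Ollama tag `qwen2.5-coder:7b`), so a consumer must split on the first colon only. An illustrative parsing sketch, not Lynkr's actual parser:

```javascript
// Parse a TIER_* value of the form "provider:model". Split on the first
// colon only, since model ids may themselves contain colons.
function parseTierSpec(spec) {
  const i = spec.indexOf(":");
  if (i === -1) throw new Error(`invalid tier spec: ${spec}`);
  return { provider: spec.slice(0, i), model: spec.slice(i + 1) };
}

console.log(parseTierSpec("ollama:qwen2.5-coder:7b"));
// { provider: 'ollama', model: 'qwen2.5-coder:7b' }
console.log(parseTierSpec(process.env.TIER_SIMPLE ?? "ollama:qwen2.5-coder:7b"));
```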

OpenClaw Mode

When running under OpenClaw, enable model name rewriting so the actual provider and model appear in responses:

OPENCLAW_MODE=true

This replaces the generic model: "auto" in responses with the actual provider/model that handled the request (e.g., ollama/qwen2.5-coder:7b or bedrock/claude-sonnet-4).
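The rewrite described above amounts to a small transformation on the response body. A sketch of the idea, not Lynkr's actual implementation (the field names mirror the standard OpenAI response shape):

```javascript
// Illustrative sketch of OPENCLAW_MODE's rewriting: replace the generic
// model "auto" in an OpenAI-style response with the provider/model that
// actually served the request.
function rewriteModelName(response, provider, model) {
  if (response.model === "auto") {
    return { ...response, model: `${provider}/${model}` };
  }
  return response;
}

const upstream = { id: "chatcmpl-1", model: "auto", choices: [] };
console.log(rewriteModelName(upstream, "ollama", "qwen2.5-coder:7b").model);
// ollama/qwen2.5-coder:7b
```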

Provider Registration

Add to your openclaw.json:

{
  "models": {
    "providers": [
      {
        "name": "lynkr",
        "type": "openai-compatible",
        "base_url": "http://localhost:8081/v1",
        "api_key": "any-value",
        "models": ["auto"]
      }
    ]
  },
  "agents": {
    "defaults": {
      "models": {
        "primary": "lynkr/auto",
        "fallback": "lynkr/auto"
      }
    }
  }
}

Features

  • 12+ providers: Ollama, OpenAI, Anthropic (Azure/Bedrock/Direct), OpenRouter, Vertex, Moonshot, Z.AI, LM Studio, llama.cpp
  • Smart routing: Heuristic + optional BERT-based complexity classification
  • Tool support: Server-side tool execution with IDE-aware tool mapping (Cursor, Cline, Continue, Codex)
  • Session management: Persistent sessions with cross-request deduplication
  • Observability: Prometheus metrics, circuit breaker status, routing decision headers (X-Lynkr-*)
  • Agent-aware: X-Agent-Role header for multi-agent framework routing hints
  • Lazy tool loading: On-demand tool registration for fast startup
  • History compression: Automatic conversation trimming for long sessions

Response Headers

Every response includes routing metadata:

Header                       Description
X-Lynkr-Provider             Provider that handled the request
X-Lynkr-Model                Model used
X-Lynkr-Tier                 Complexity tier (SIMPLE/MEDIUM/COMPLEX/REASONING)
X-Lynkr-Complexity-Score     Numeric score 0-100
X-Lynkr-Routing-Method       How the route was decided
X-Lynkr-Cost-Optimized       Whether cost optimization changed the provider
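Reading this routing metadata off a response is straightforward. In the sketch below, a Headers object (global in Node 18+) stands in for the `res.headers` you would get from a real fetch call against the proxy; the header names come from the table above:

```javascript
// Pull the X-Lynkr-* routing metadata off a response's headers.
function routingInfo(headers) {
  return {
    provider: headers.get("X-Lynkr-Provider"),
    model: headers.get("X-Lynkr-Model"),
    tier: headers.get("X-Lynkr-Tier"),
    score: Number(headers.get("X-Lynkr-Complexity-Score")),
  };
}

// Simulated response headers; with a live proxy this would be res.headers.
const h = new Headers({
  "X-Lynkr-Provider": "ollama",
  "X-Lynkr-Model": "qwen2.5-coder:7b",
  "X-Lynkr-Tier": "SIMPLE",
  "X-Lynkr-Complexity-Score": "12",
});
console.log(routingInfo(h));
```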
