Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Chinese LLM Router

Route OpenClaw chats to top Chinese LLMs with smart model selection, auto-fallback, cost tracking, and unified OpenAI-compatible API access.

MIT-0 · Free to use, modify, and redistribute. No attribution required.
0 · 652 · 1 current installs · 1 all-time installs
by xund@Xdd-xund
Security Scan
VirusTotal
Suspicious
OpenClaw
Benign
high confidence
Purpose & Capability
The code (router.js + setup.js) implements a model router for the listed Chinese providers and requires provider API keys — this matches the skill's description. Minor mismatch: SKILL.md claims it 'reads API keys from environment or from ~/.chinese-llm-router/config.json', but the shipped code only reads/writes the config file (no environment-variable fallback is implemented).
Instruction Scope
Runtime instructions are limited to running the interactive setup and using the router CLI or exported functions. The scripts only read/write ~/.chinese-llm-router/config.json and make HTTP(S) calls to provider baseUrls. SKILL.md mentions features like 'cost tracking' and 'context-aware' preferences which are advertised but not implemented in the provided code (no persistent per-conversation preference store or token accounting found).
Install Mechanism
There is no install spec; this is instruction-plus-scripts only. Nothing is downloaded from external URLs during install. Risk is low because no archive/executable is pulled from unknown hosts.
Credentials
The skill asks the user to provide API keys for multiple Chinese LLM providers — this is proportionate to a router that can use many providers. Registry metadata declared no required env vars, but SKILL.md suggests env-var support (which the code does not implement). API keys are stored in a config file under the user's home directory; that is expected but sensitive.
Persistence & Privilege
The skill writes a config file to ~/.chinese-llm-router/config.json and creates the directory if needed. It does not request elevated or system-wide privileges, does not set always:true, and does not modify other skills or agent-wide settings.
Assessment
Key points before installing:

  • Function/purpose: The code is a straightforward router that reads API keys from ~/.chinese-llm-router/config.json (setup.js) and sends chat requests to provider baseUrls; this matches the skill description. Expect to provide API keys for any provider you want to use.
  • SKILL.md inaccuracies: The README says it reads keys from environment variables OR the config file, but the provided code only reads the config file. Several advertised features (cost tracking, persistent per-conversation model preferences) are also absent from the provided scripts; they may be planned features, not implemented ones.
  • Sensitive data: The setup script saves your provider API keys to ~/.chinese-llm-router/config.json. Protect that file (set permissions, e.g., chmod 600) and only enter keys for providers you trust. Any prompt you send through this router is transmitted to the configured provider(s) and may be logged or retained by them.
  • Bug to be aware of: router.js constructs the request URL in a way that duplicates the path (it appends '/chat/completions' twice), which will likely break calls to providers as-is. If you encounter failures, inspect chatCompletion() and adjust the path construction (use the URL's pathname without re-appending '/chat/completions').
  • Operational advice: Review the config file after running setup to confirm keys and baseUrls are correct. If you prefer not to type keys interactively, you can create config.json yourself with the correct structure. Test providers with the CLI command node scripts/router.js test <model> before relying on the skill.

Overall recommendation: The skill appears coherent and not malicious, but review and harden the local config file, and be aware of the documentation/code mismatches and the URL path bug before use.
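The duplicated-path bug called out above can be worked around with a construction like the following. This is a hedged sketch, not the shipped code; the function name buildChatUrl is illustrative.

```javascript
// Sketch of a fix for the duplicated-path bug: append the
// '/chat/completions' suffix to the configured baseUrl exactly once.
// buildChatUrl is an illustrative name, not a function from router.js.
function buildChatUrl(baseUrl) {
  // Drop any trailing slash, then add the path a single time.
  return baseUrl.replace(/\/+$/, "") + "/chat/completions";
}

console.log(buildChatUrl("https://api.deepseek.com/v1"));
// → https://api.deepseek.com/v1/chat/completions
```

Whatever the exact fix, the invariant to check in chatCompletion() is that the final URL contains '/chat/completions' once, regardless of whether the configured baseUrl ends with a slash.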

Like a lobster shell, security has layers — review code before you run it.

Current version: v1.0.0
Download zip
Tags: chinese, deepseek, doubao, glm, kimi, latest, llm, minimax, qwen, router, step

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

SKILL.md

Chinese LLM Router

Route your OpenClaw conversations to the best Chinese AI models — no config headaches, just pick and chat.

What It Does

Gives your OpenClaw instant access to all major Chinese LLMs through a single unified interface:

  • DeepSeek (V3.2 / R1) — Best open-source reasoning, dirt cheap
  • Qwen (Qwen3-Max / Qwen3-Max-Thinking / Qwen3-Coder-Plus) — Alibaba's flagship, strong all-rounder
  • GLM (GLM-5 / GLM-4.7) — Zhipu AI, top-tier coding & agent tasks
  • Kimi (K2.5 / K2.5-Thinking) — Moonshot AI, great for long context & vision
  • Doubao Seed 2.0 (Pro / Lite / Mini) — ByteDance, fast & cheap
  • MiniMax (M2.5) — Lightweight powerhouse, runs locally too
  • Step (3.5 Flash) — StepFun, blazing fast inference
  • Baichuan (Baichuan4-Turbo) — Strong Chinese language understanding
  • Spark (v4.0 Ultra) — iFlytek, speech & Chinese NLP specialist
  • Hunyuan (Turbo-S) — Tencent, WeChat ecosystem integration

Quick Start

Tell your OpenClaw:

Use DeepSeek V3.2 for this conversation

Or ask it to pick the best model:

Which Chinese model is best for coding? Switch to it.

Commands

| Command | What it does |
| --- | --- |
| `list models` | Show all available Chinese LLMs with status |
| `use <model>` | Switch to a specific model |
| `compare <models>` | Compare capabilities & pricing |
| `recommend <task>` | Get model recommendation for a task type |
| `test <model>` | Send a test prompt to verify connectivity |
| `status` | Check which models are currently accessible |

Model Selection Guide

| Task | Recommended Model | Why |
| --- | --- | --- |
| General chat | Qwen3-Max | Best all-rounder, strong Chinese |
| Coding | GLM-5 / Kimi K2.5 | Top coding benchmarks |
| Math & reasoning | DeepSeek R1 | Purpose-built for reasoning |
| Long documents | Kimi K2.5 (128K) / DeepSeek V3.2 (1M) | Massive context windows |
| Fast & cheap | Step 3.5 Flash / Doubao Seed 2.0 Mini | Sub-second latency |
| Creative writing | Qwen3-Max / Doubao Seed 2.0 Pro | Rich Chinese expression |
| Agent tasks | GLM-5 / Qwen3-Max | Best tool-use support |
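The selection guide above can be sketched as a minimal task-to-model lookup. This mapping is illustrative only; the real `recommend` command may use different logic, and the model IDs follow the config example below rather than any documented internal table.

```javascript
// Minimal task→model lookup mirroring the selection guide.
// The task keys and the default are assumptions for illustration.
const RECOMMENDATIONS = {
  chat: "qwen3-max",
  coding: "glm-5",
  reasoning: "deepseek-reasoner",
  "long-context": "kimi-k2.5",
  cheap: "doubao-seed-2.0-mini",
};

function recommend(task) {
  // Fall back to the all-rounder used as "default" in the config example.
  return RECOMMENDATIONS[task] ?? "qwen3-max";
}

console.log(recommend("coding")); // → glm-5
```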

Configuration

The skill reads API keys from ~/.chinese-llm-router/config.json (environment-variable support is also advertised, but per the security scan above it is not implemented in the shipped code):

{
  "providers": {
    "deepseek": {
      "apiKey": "sk-xxx",
      "baseUrl": "https://api.deepseek.com/v1",
      "models": ["deepseek-chat", "deepseek-reasoner"]
    },
    "qwen": {
      "apiKey": "sk-xxx",
      "baseUrl": "https://dashscope.aliyuncs.com/compatible-mode/v1",
      "models": ["qwen3-max", "qwen3-max-thinking", "qwen3-coder-plus"]
    },
    "glm": {
      "apiKey": "xxx.xxx",
      "baseUrl": "https://open.bigmodel.cn/api/paas/v4",
      "models": ["glm-5", "glm-4-plus"]
    },
    "kimi": {
      "apiKey": "sk-xxx",
      "baseUrl": "https://api.moonshot.cn/v1",
      "models": ["kimi-k2.5", "kimi-k2.5-thinking"]
    },
    "doubao": {
      "apiKey": "xxx",
      "baseUrl": "https://ark.cn-beijing.volces.com/api/v3",
      "models": ["doubao-seed-2.0-pro", "doubao-seed-2.0-lite", "doubao-seed-2.0-mini"]
    },
    "minimax": {
      "apiKey": "xxx",
      "baseUrl": "https://api.minimax.chat/v1",
      "models": ["minimax-m2.5"]
    },
    "step": {
      "apiKey": "xxx",
      "baseUrl": "https://api.stepfun.com/v1",
      "models": ["step-3.5-flash"]
    },
    "baichuan": {
      "apiKey": "xxx",
      "baseUrl": "https://api.baichuan-ai.com/v1",
      "models": ["baichuan4-turbo"]
    },
    "spark": {
      "apiKey": "xxx",
      "baseUrl": "https://spark-api-open.xf-yun.com/v1",
      "models": ["spark-v4.0-ultra"]
    },
    "hunyuan": {
      "apiKey": "xxx",
      "baseUrl": "https://api.hunyuan.cloud.tencent.com/v1",
      "models": ["hunyuan-turbo-s"]
    }
  },
  "default": "qwen3-max",
  "fallback": ["deepseek-chat", "doubao-seed-2.0-pro"]
}

Setup

  1. Get API keys from the providers you want (most offer free tiers).

  2. Run the setup script:

    node scripts/setup.js
    
  3. Done! Your OpenClaw can now use any configured model.

Pricing Reference (Feb 2026)

| Model | Input (¥/M tokens) | Output (¥/M tokens) | Notes |
| --- | --- | --- | --- |
| DeepSeek V3.2 | ¥0.5 (cache ¥0.1) | ¥2.0 | Cheapest flagship |
| Qwen3-Max | ¥2.0 | ¥6.0 | Free tier available |
| GLM-5 | ¥5.0 | ¥5.0 | Just launched, may change |
| Kimi K2.5 | ¥2.0 | ¥6.0 | Open source, self-host free |
| Doubao Seed 2.0 Pro | ¥0.8 | ¥2.0 | ByteDance subsidy |
| Doubao Seed 2.0 Mini | ¥0.15 | ¥0.3 | Ultra cheap |
| MiniMax M2.5 | ¥1.0 | ¥3.0 | Can run locally |
| Step 3.5 Flash | ¥0.7 | ¥1.4 | Fastest inference |

Prices as of Feb 2026. All providers offer free tiers or credits for new users.

All APIs are OpenAI-Compatible

Every provider listed uses the OpenAI chat/completions format. No special SDKs needed — just change baseUrl and apiKey.
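Because the format is shared, a plain POST is all that is needed. A minimal sketch follows; `fetch` assumes Node 18+, the request body is the standard OpenAI chat/completions shape, and the helper names are illustrative rather than from the shipped router.js.

```javascript
// Build standard OpenAI-style chat/completions request options.
function buildChatRequest(apiKey, model, userMessage) {
  return {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model,
      messages: [{ role: "user", content: userMessage }],
    }),
  };
}

// Send one chat turn to any OpenAI-compatible baseUrl.
async function chat(baseUrl, apiKey, model, userMessage) {
  const res = await fetch(`${baseUrl}/chat/completions`,
    buildChatRequest(apiKey, model, userMessage));
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}
```

Switching providers is then just a matter of swapping the baseUrl, apiKey, and model arguments, e.g. `chat("https://api.deepseek.com/v1", key, "deepseek-chat", "hello")`.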

Features

  • Auto-fallback: If one provider is down, automatically try the next
  • Cost tracking: See per-model token usage and estimated cost
  • Smart routing: Describe your task, get the best model recommendation
  • Batch compare: Send the same prompt to multiple models, compare outputs
  • Context-aware: Remembers your model preference per conversation topic
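The auto-fallback behavior described above amounts to a loop over the default model and the fallback list. A sketch under that assumption; `withFallback` and `tryModel` are illustrative names, not functions from the shipped scripts.

```javascript
// Try each model in order; return the first success, rethrow the last error.
async function withFallback(models, tryModel) {
  let lastErr;
  for (const model of models) {
    try {
      return await tryModel(model); // e.g. one chat/completions call
    } catch (err) {
      lastErr = err; // provider down or erroring: try the next one
    }
  }
  throw lastErr;
}
```

With the config example above, the model order would be the "default" entry followed by the "fallback" array, e.g. `withFallback(["qwen3-max", "deepseek-chat", "doubao-seed-2.0-pro"], m => sendChat(m))`.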

Files

4 total
