Add SiliconFlow Provider (98+ Models, Free Tier)

Configure SiliconFlow (硅基流动) as a model source for OpenClaw. SiliconFlow is a leading Chinese AI model inference platform offering 98+ chat models, including several free ones (Qwen3-8B, DeepSeek-R1-8B, etc.). It uses the standard OpenAI protocol (openai-completions). This skill covers the complete workflow: provider registration, model definitions, alias configuration, fallback-chain integration, and validation. Use this skill when an administrator says they want to "add SiliconFlow", "configure SiliconFlow", "hook up SF models", "add Kimi", "add Qwen3", "add free models", or "integrate DeepSeek V3.2".

MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan

  • VirusTotal: Benign
  • OpenClaw: Suspicious (high confidence)
Purpose & Capability
The skill's name, description, and instructions align: it guides adding SiliconFlow provider entries and model aliases into OpenClaw. Requesting an API key for SiliconFlow is appropriate for this purpose. However, the registry metadata lists no primary credential or required config paths even though the SKILL.md explicitly instructs adding an API key to ~/.openclaw/openclaw.json — a documentation/metadata mismatch.
Instruction Scope
SKILL.md explicitly instructs backing up and patching ~/.openclaw/openclaw.json and placing the SiliconFlow API key directly into the provider config, and includes a curl command to validate the key. Those file-write and secret-storage steps are within the functional scope (configuring a provider) but they reference a local config path and secret handling that are not declared in the skill metadata — this gap increases risk (possible accidental secret exposure) and should be highlighted to administrators.
Install Mechanism
The skill is instruction-only with no install spec and no bundled code—lowest install risk. There are no downloads or third-party packages referenced in the SKILL.md or README.
Credentials
Functionally the skill needs a single SiliconFlow API key (reasonable). But the manifest declares no required env vars/credentials while the instructions instruct storing an sk-xxx API key directly in openclaw.json. The absence of a declared primary credential is an inconsistency; storing API keys in a plaintext config may expose the secret to other components or people with access to that file.
Persistence & Privilege
The skill does not request always:true and does not install persistent code. It does instruct modifying the agent's OpenClaw config file (~/.openclaw/openclaw.json) which is a normal operation for configuring a provider, but administrators should be aware the agent (if invoked) can write that config and thereby persist credentials in plain config files.
What to consider before installing
This skill appears to legitimately add SiliconFlow models to OpenClaw, but pay attention to the following before installing:

  • Backup: Perform the recommended backup of ~/.openclaw/openclaw.json and inspect the file before and after applying changes.
  • Secrets: SKILL.md asks you to place your SiliconFlow API key (sk-xxx) into openclaw.json. Consider whether you want secrets stored in that file; where possible, prefer the platform's secrets store or environment variables over plaintext files.
  • Metadata mismatch: The skill metadata declares no credentials or config paths, but the instructions require both; ask the publisher to correct the manifest or document why this omission exists.
  • Least privilege: Create an API key with limited scope (if SiliconFlow supports it) and monitor usage and billing; revoke it if you see unexpected activity.
  • Validation: Use the provided curl to validate the key, but review the endpoint URL before issuing requests; confirm you trust the referenced domain (api.siliconflow.cn) and the invite link.
  • Test safely: Start with free models and a disposable API key, or in a non-production environment.

If you need higher assurance, request an updated skill manifest that declares the required credential and config path, or ask the maintainer to offer an option that stores the API key in a secure secrets store instead of a plaintext config file.
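If you prefer the environment-variable route, one possible approach is to splice the key into openclaw.json from the environment so the plaintext sk-xxx value never appears on the command line or in shell history. The sketch below is illustrative, not part of the skill: `inject_key` and the `SILICONFLOW_API_KEY` variable name are assumptions; the config shape mirrors the provider block shown in SKILL.md below.

```python
# Sketch: write the API key into an openclaw-style config from an env var.
# inject_key and SILICONFLOW_API_KEY are illustrative names, not part of
# the skill; the nested shape follows the SKILL.md provider block.
import json
import os
from pathlib import Path

def inject_key(cfg: dict, api_key: str) -> dict:
    """Set models.providers.siliconflow.apiKey, creating parents as needed."""
    provider = (cfg.setdefault("models", {})
                   .setdefault("providers", {})
                   .setdefault("siliconflow", {}))
    provider["apiKey"] = api_key
    return cfg

if __name__ == "__main__":
    path = Path.home() / ".openclaw" / "openclaw.json"
    key = os.environ.get("SILICONFLOW_API_KEY")
    if key and path.exists():  # only touch a real config when a key is set
        cfg = inject_key(json.loads(path.read_text()), key)
        path.write_text(json.dumps(cfg, indent=2, ensure_ascii=False))
```

Note that the key still ends up in the config file; this only keeps it out of your shell history and lets you source it from a secrets manager.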

Like a lobster shell, security has layers — review code before you run it.

Current version: v1.0.0 (latest: vk974z8hk9fh51v1t4dwrxb0gm580v4st)

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

SKILL.md

Configure the SiliconFlow Provider (硅基流动 model inference platform)

SiliconFlow (硅基流动) is a leading Chinese AI model inference platform offering 98+ chat models across the major Qwen, DeepSeek, Kimi, GLM, and MiniMax series.

Core advantages

  • 🆓 Several free models: Qwen3-8B, DeepSeek-R1-8B, and others are completely free
  • 💰 Very low prices: flagship models cost only 30-50% of the official rates
  • 🔌 OpenAI compatible: standard openai-completions protocol, plug and play
  • 📦 Rich model catalog: one API key gives access to all models

If you do not have a SiliconFlow account yet, register via the invite link (both parties receive bonus credit): 👉 https://cloud.siliconflow.cn/i/ihj5inat

| Item | Value |
|---|---|
| Provider name | siliconflow |
| API protocol | openai-completions |
| Base URL | https://api.siliconflow.cn/v1 |
| Authentication | Bearer Token (API Key) |

Prerequisites

| Item | Notes |
|---|---|
| API Key | Created in the console, format sk-xxx |
| Balance | Free models need no balance; paid models require a top-up (new users receive ¥14 on registration) |

Get an API Key

  1. Register: https://cloud.siliconflow.cn/i/ihj5inat
  2. Open the console → API Keys → Create
  3. Copy the sk-xxx key

Validate the API Key

curl -s 'https://api.siliconflow.cn/v1/user/info' \
  -H 'Authorization: Bearer <YOUR_API_KEY>' | python3 -m json.tool

Expect a response containing "status": "normal" and balance information.


Recommended models

🆓 Free models (unlimited use)

| Model ID | Description | Suggested alias |
|---|---|---|
| Qwen/Qwen3-8B | Qwen 3rd-gen 8B, strong all-rounder | sf-qwen3-8b |
| deepseek-ai/DeepSeek-R1-0528-Qwen3-8B | DeepSeek R1 reasoning distillation | sf-r1-8b |
| THUDM/glm-4-9b-chat | Zhipu GLM-4 9B | sf-glm4 |
| Qwen/Qwen2.5-7B-Instruct | Qwen 2.5 7B | sf-qwen25-7b |
| Qwen/Qwen2.5-Coder-7B-Instruct | Qwen 2.5 coding-focused | sf-qwen-coder-7b |

💰 Cost-effective models (cheap and capable)

| Model ID | Input/Output (¥/M tokens) | Description | Suggested alias |
|---|---|---|---|
| Qwen/Qwen3-30B-A3B | 0.7 / 2.8 | MoE architecture, excellent value | sf-qwen3-30b |
| Qwen/Qwen3-Coder-30B-A3B-Instruct | 0.7 / 2.8 | Coding-focused 30B | sf-coder-30b |
| deepseek-ai/DeepSeek-V3.2 | 2.0 / 3.0 | Latest DeepSeek release | sf-dsv3 |
| Pro/deepseek-ai/DeepSeek-V3.2 | 2.0 / 3.0 | Pro accelerated version | sf-dsv3-pro |

🚀 Flagship models (for important tasks)

| Model ID | Input/Output (¥/M tokens) | Description | Suggested alias |
|---|---|---|---|
| deepseek-ai/DeepSeek-R1 | 4.0 / 16.0 | Reasoning model | sf-r1 |
| Pro/moonshotai/Kimi-K2.5 | 4.0 / 21.0 | Moonshot's strongest model | sf-kimi |
| Qwen/Qwen3-Coder-480B-A35B-Instruct | 8.0 / 16.0 | Flagship coding 480B MoE | sf-coder-480b |

Configuration steps

Step 1: Back up the config

cp ~/.openclaw/openclaw.json ~/.openclaw/openclaw.json.backup.$(date +%Y%m%d_%H%M%S)

Step 2: Add the provider

Add the SiliconFlow provider via gateway config.patch. The recommended configuration below covers 8 curated models:

{
  "models": {
    "providers": {
      "siliconflow": {
        "baseUrl": "https://api.siliconflow.cn/v1",
        "apiKey": "<YOUR_API_KEY>",
        "api": "openai-completions",
        "models": [
          {
            "id": "Qwen/Qwen3-8B",
            "name": "Qwen3 8B (Free)",
            "reasoning": false,
            "input": ["text"],
            "cost": {"input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0},
            "contextWindow": 32768,
            "maxTokens": 8192
          },
          {
            "id": "deepseek-ai/DeepSeek-R1-0528-Qwen3-8B",
            "name": "DeepSeek R1 Qwen3 8B (Free)",
            "reasoning": true,
            "input": ["text"],
            "cost": {"input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0},
            "contextWindow": 32768,
            "maxTokens": 8192
          },
          {
            "id": "Qwen/Qwen3-30B-A3B",
            "name": "Qwen3 30B MoE",
            "reasoning": false,
            "input": ["text"],
            "cost": {"input": 0.7, "output": 2.8, "cacheRead": 0, "cacheWrite": 0},
            "contextWindow": 32768,
            "maxTokens": 8192
          },
          {
            "id": "Qwen/Qwen3-Coder-30B-A3B-Instruct",
            "name": "Qwen3 Coder 30B",
            "reasoning": false,
            "input": ["text"],
            "cost": {"input": 0.7, "output": 2.8, "cacheRead": 0, "cacheWrite": 0},
            "contextWindow": 32768,
            "maxTokens": 8192
          },
          {
            "id": "deepseek-ai/DeepSeek-V3.2",
            "name": "DeepSeek V3.2",
            "reasoning": false,
            "input": ["text"],
            "cost": {"input": 2.0, "output": 3.0, "cacheRead": 0, "cacheWrite": 0},
            "contextWindow": 128000,
            "maxTokens": 8192
          },
          {
            "id": "deepseek-ai/DeepSeek-R1",
            "name": "DeepSeek R1",
            "reasoning": true,
            "input": ["text"],
            "cost": {"input": 4.0, "output": 16.0, "cacheRead": 0, "cacheWrite": 0},
            "contextWindow": 128000,
            "maxTokens": 8192
          },
          {
            "id": "Pro/moonshotai/Kimi-K2.5",
            "name": "Kimi K2.5",
            "reasoning": false,
            "input": ["text"],
            "cost": {"input": 4.0, "output": 21.0, "cacheRead": 0, "cacheWrite": 0},
            "contextWindow": 128000,
            "maxTokens": 8192
          },
          {
            "id": "Qwen/Qwen3-Coder-480B-A35B-Instruct",
            "name": "Qwen3 Coder 480B",
            "reasoning": false,
            "input": ["text"],
            "cost": {"input": 8.0, "output": 16.0, "cacheRead": 0, "cacheWrite": 0},
            "contextWindow": 32768,
            "maxTokens": 8192
          }
        ]
      }
    }
  }
}

Step 3: Add aliases

Add the alias mappings in the same patch:

{
  "agents": {
    "defaults": {
      "models": {
        "siliconflow/Qwen/Qwen3-8B": {"alias": "sf-qwen3-8b"},
        "siliconflow/deepseek-ai/DeepSeek-R1-0528-Qwen3-8B": {"alias": "sf-r1-8b"},
        "siliconflow/Qwen/Qwen3-30B-A3B": {"alias": "sf-qwen3-30b"},
        "siliconflow/Qwen/Qwen3-Coder-30B-A3B-Instruct": {"alias": "sf-coder-30b"},
        "siliconflow/deepseek-ai/DeepSeek-V3.2": {"alias": "sf-dsv3"},
        "siliconflow/deepseek-ai/DeepSeek-R1": {"alias": "sf-r1"},
        "siliconflow/Pro/moonshotai/Kimi-K2.5": {"alias": "sf-kimi"},
        "siliconflow/Qwen/Qwen3-Coder-480B-A35B-Instruct": {"alias": "sf-coder-480b"}
      }
    }
  }
}

⚠️ agents.defaults.models.<id> allows only the alias field! Any other field will crash the Gateway.

Step 4: Add to the fallback chain

Add the free models to the fallback chain as a last resort:

{
  "agents": {
    "defaults": {
      "model": {
        "fallbacks": [
          "...(existing fallbacks)...",
          "siliconflow/Qwen/Qwen3-8B",
          "siliconflow/Qwen/Qwen3-30B-A3B"
        ]
      }
    }
  }
}

Recommended fallback strategy: put free models first (Qwen3-8B), then cheap models (Qwen3-30B).

Step 5: Verify

# 1. Validate the config
openclaw doctor

# 2. Restart to apply changes
openclaw gateway restart

# 3. Check gateway status
openclaw gateway status

# 4. Test model switching
# In chat, type: /model sf-kimi

Useful APIs

Check your balance

curl -s 'https://api.siliconflow.cn/v1/user/info' \
  -H 'Authorization: Bearer <API_KEY>' | python3 -c "
import json,sys; d=json.load(sys.stdin)['data']
print(f'Top-up balance: ¥{d[\"chargeBalance\"]}')
print(f'Bonus balance: ¥{d[\"balance\"]}')
print(f'Total balance: ¥{d[\"totalBalance\"]}')
"

List available models

# All chat models
curl -s 'https://api.siliconflow.cn/v1/models?sub_type=chat' \
  -H 'Authorization: Bearer <API_KEY>' | python3 -c "
import json,sys
models = json.load(sys.stdin)['data']
print(f'{len(models)} chat models in total')
for m in sorted(models, key=lambda x: x['id']):
    print(f'  {m[\"id\"]}')
"

Test a model

curl -s 'https://api.siliconflow.cn/v1/chat/completions' \
  -H 'Authorization: Bearer <API_KEY>' \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "Qwen/Qwen3-8B",
    "messages": [{"role":"user","content":"Say OK"}],
    "max_tokens": 5
  }'

Add more models

SiliconFlow offers 98+ chat models. To add more, first query the available models via the model-list API, then append them to the provider's models array using the Step 2 format.
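The query-then-append flow can be sketched in Python. This is a hedged outline: the endpoint shape follows the curl examples above, while `build_entry`, its placeholder defaults, and the `SILICONFLOW_API_KEY` variable are illustrative assumptions to be adjusted per model.

```python
# Sketch: turn entries from /v1/models?sub_type=chat into the provider
# "models" objects used in Step 2. contextWindow/maxTokens/cost below are
# placeholders; look up the real values for each model before using them.
import json
import os
import urllib.request

def build_entry(model_id: str, reasoning: bool = False) -> dict:
    """Produce a Step 2-style model object with placeholder limits/costs."""
    return {
        "id": model_id,
        "name": model_id.split("/")[-1],
        "reasoning": reasoning,
        "input": ["text"],
        "cost": {"input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0},
        "contextWindow": 32768,
        "maxTokens": 8192,
    }

def list_chat_models(api_key: str) -> list[str]:
    """Fetch all chat-model IDs, sorted, using the model-list API above."""
    req = urllib.request.Request(
        "https://api.siliconflow.cn/v1/models?sub_type=chat",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return sorted(m["id"] for m in json.load(resp)["data"])

if __name__ == "__main__" and "SILICONFLOW_API_KEY" in os.environ:
    ids = list_chat_models(os.environ["SILICONFLOW_API_KEY"])
    entries = [build_entry(i) for i in ids if i.startswith("Qwen/")]
    print(json.dumps(entries, indent=2, ensure_ascii=False))
```

Paste the printed objects into the provider's models array, then fix up reasoning, contextWindow, maxTokens, and cost per model.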

Popular models at a glance

| Model | Input/Output (¥/M tokens) | Notes |
|---|---|---|
| zai-org/GLM-4.6 | 3.5 / 14.0 | Zhipu's latest flagship |
| Pro/deepseek-ai/DeepSeek-R1 | 4.0 / 16.0 | Pro accelerated reasoning |
| moonshotai/Kimi-K2-Thinking | 4.0 / 16.0 | Kimi thinking model |
| Qwen/Qwen3-235B-A22B-Instruct-2507 | 2.5 / 10.0 | Qwen3 instruct model |
| baidu/ERNIE-4.5-300B-A47B | 2.0 / 8.0 | Baidu ERNIE |
| stepfun-ai/step3 | 4.0 / 10.0 | StepFun Step3 |

Notes

  1. Free models have QPS limits: concurrency on free models may be capped, making them best suited to fallbacks and low-frequency tasks
  2. Pro vs. standard versions: models with the Pro/ prefix run on dedicated inference clusters and are faster but slightly pricier
  3. Model IDs are case-sensitive: they must match exactly, e.g. Qwen/Qwen3-8B cannot be written as qwen/qwen3-8b
  4. cost field unit: ¥ per million tokens (1M tokens)
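Because cost fields are quoted in ¥ per million tokens, a single request's cost works out to tokens × price / 1,000,000 for each direction. A minimal sketch (the `request_cost` helper is illustrative, not part of the skill):

```python
# Sketch: estimate one request's cost from the ¥/M-token prices above.
def request_cost(input_tokens: int, output_tokens: int,
                 price_in: float, price_out: float) -> float:
    """Prices are in ¥ per 1,000,000 tokens, matching the cost fields."""
    return (input_tokens * price_in + output_tokens * price_out) / 1_000_000

# e.g. DeepSeek-V3.2 (2.0 / 3.0): 10k input + 2k output tokens
print(f"¥{request_cost(10_000, 2_000, 2.0, 3.0):.4f}")  # → ¥0.0260
```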

Sign-up link: https://cloud.siliconflow.cn/i/ihj5inat (registering via the invite gives both parties bonus credit)
