Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

AI守门人 (AI Gatekeeper)

A management tool for an LLM API proxy service. Supports multi-provider forwarding (Bailian / OpenRouter / NVIDIA), content safety auditing, and health monitoring. Use cases: (1) start/stop/restart the proxy service; (2) view proxy status and statistics; (3) configure content filter rules.

MIT-0 · Free to use, modify, and redistribute. No attribution required.
0 · 49 · 0 current installs · 0 all-time installs
Security Scan
VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The code and SKILL.md implement a local LLM API proxy (start/stop/status/health/stats/logs) with multi-provider forwarding and layered content filtering, which matches the description. However, the package declares neither runtime dependencies nor required env vars, even though the scripts and docs expect tools (curl, python3, lsof, tail) and optional env vars (LLM_PROXY_PORT, RULES_FILE); the metadata therefore underreports what the skill needs.
Instruction Scope
SKILL.md triggers execute the control script and curl to local endpoints — expected for a manager tool. The control script runs privileged actions like kill -9 on processes found by port (uses lsof), writes PID and log files under /tmp and ~/.openclaw, and tails logs. The proxy accepts incoming requests, forwards Authorization/Api-Key headers to upstream providers, and writes JSONL logs. README claims logs are "已脱敏" (desensitized) but I could not find explicit sanitization in the provided code snippet — logs may therefore contain secrets. The proxy will necessarily see API keys and request content; the instructions do not warn about this nor declare where sensitive data may be stored.
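The "已脱敏" (desensitized) claim is straightforward to verify or retrofit: before each request record is appended to the JSONL log, credential-bearing headers can be masked. A minimal sketch of such a redaction step, assuming a `redact_headers` helper and header names that are illustrative rather than taken from the bundle:

```python
import json

# Headers whose values should never reach the JSONL log in plaintext.
SENSITIVE_HEADERS = {"authorization", "api-key", "x-api-key", "proxy-authorization"}

def redact_headers(headers: dict) -> dict:
    """Return a copy of the headers with credential values masked."""
    redacted = {}
    for name, value in headers.items():
        if name.lower() in SENSITIVE_HEADERS:
            # Keep a short prefix so log entries stay correlatable.
            redacted[name] = value[:8] + "...REDACTED" if len(value) > 8 else "REDACTED"
        else:
            redacted[name] = value
    return redacted

record = {
    "path": "/bailian/chat/completions",
    "headers": redact_headers({
        "Authorization": "Bearer sk-1234567890abcdef",
        "Content-Type": "application/json",
    }),
}
line = json.dumps(record)  # one JSONL entry, credentials masked
```

Auditing scripts/llm-proxy.py for an equivalent step (or adding one) is the quickest way to confirm the "desensitized" claim before routing real keys through the proxy.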
Install Mechanism
There is no external install mechanism (no downloads), and the skill includes the Python and shell scripts in the bundle. This keeps risk moderate: code will run locally but nothing is fetched from untrusted URLs. Still, executing bundled scripts runs arbitrary code on the host and should be reviewed.
Credentials
The skill declares no required environment variables, but README and code reference LLM_PROXY_PORT and RULES_FILE and the docs recommend exporting provider API keys (BAILIAN_API_KEY, OPENROUTER_API_KEY, NVIDIA_API_KEY). The proxy will receive/forward Authorization headers and may log request bodies; that is proportional for a proxy but sensitive. There is also an inconsistency between the content-filter rules file and runtime behavior: the rules file's response_actions sets critical.block=false, but the python code treats 'critical' alerts as blocking (403). This mismatch could lead to surprising behavior.
Persistence & Privilege
The skill is not marked 'always' and is user-invocable only. It writes a PID to /tmp and logs to ~/.openclaw/logs and does not appear to modify other skills or global agent config. It runs a local server bound to 127.0.0.1 by default (not exposed externally), which is the expected level of persistence for a local proxy.
What to consider before installing
Before installing or running this skill:

  • Manually inspect the included scripts (scripts/llm-proxy.py and scripts/llm-proxy-ctl.sh). Look for where request bodies, headers, or upstream responses are logged; confirm that secrets (Authorization, API keys, tokens, full request bodies) are not written to logs in plaintext, or adjust the code to redact them.
  • Be aware the proxy will see and forward Authorization/Api-Key headers from clients. Only run it if you trust the environment and the code.
  • Ensure the host has the required tools (python3, curl, lsof, tail); the skill metadata does not declare these dependencies.
  • Note the rules file and code disagree about whether 'critical' alerts are blocked; test the blocking behavior and correct the rules' response_actions if needed.
  • Run initially in an isolated environment (local-only, 127.0.0.1) and review ~/.openclaw/logs/llm-proxy/* to confirm no sensitive data is retained.
  • If you plan to route production API traffic through it, add explicit access controls, and consider encrypting or not logging sensitive headers/bodies, or host the proxy in a trusted network segment.
  • If you are not comfortable auditing the code, do not install or run it with real API keys or production traffic.

Like a lobster shell, security has layers — review code before you run it.

Current version: v1.0.3
latest: vk97fnv2ysth2594dwj1r0prhtd837647

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

SKILL.md

LLM Proxy Management Tool

A local LLM API proxy service providing multi-provider forwarding, content safety auditing, and related features.

Quick Commands

Command             Description
llm-proxy start     Start the proxy service
llm-proxy stop      Stop the proxy service
llm-proxy restart   Restart the proxy service
llm-proxy status    Show proxy status
llm-proxy health    Health check (JSON output)
llm-proxy stats     Show request statistics
llm-proxy logs      Tail live logs

Features

  • 🔄 Multi-provider forwarding - Bailian, OpenRouter, NVIDIA NIM
  • 🔒 Content safety audit - two-layer review (malicious instructions + sensitive content)
  • 📊 Request statistics - real-time counts of requests, error rate, and alerts
  • 📝 Logging - every request archived in JSONL format (desensitized)

Provider Configuration

Prefix         Target API
/bailian       Alibaba Cloud Bailian    https://coding.dashscope.aliyuncs.com/v1
/openrouter    OpenRouter               https://openrouter.ai/api/v1
/nvd           NVIDIA NIM               https://integrate.api.nvidia.com/v1

Content Safety Audit

Two-layer review mechanism

Layer 1: malicious instruction detection (11 rules)

  • CMD-001: dangerous system commands
  • CMD-002: privilege escalation operations
  • SQL-001: SQL injection/deletion
  • NET-001: data exfiltration
  • NET-002: network scanning/attack tools
  • BACKDOOR-001: backdoor/reverse shell
  • EXEC-001~005: execution-intent luring, code injection, software luring, self-deletion, privilege elevation

Layer 2: sensitive content detection (6 rules)

  • PII-001/002: personally identifiable information, bank cards
  • CRED-001: credential/key leakage
  • SECRET-001: sensitive configuration leakage
  • LEGAL-001: illegal content
  • INTERNAL-001: internal information leakage
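Rules like the IDs above are typically regex patterns tagged with a severity; each layer scans the text and emits alerts. A simplified illustration with two made-up patterns (the real 17-rule set lives in scripts/content-filter-rules.json and is not reproduced here):

```python
import re

# A tiny illustrative subset; not the bundled rule set.
RULES = [
    {"id": "CMD-001", "severity": "critical", "pattern": r"rm\s+-rf\s+/"},
    {"id": "CRED-001", "severity": "high", "pattern": r"(?i)api[_-]?key\s*[:=]\s*\S+"},
]

def scan(text: str) -> list:
    """Return the list of rule alerts triggered by `text`."""
    return [{"id": r["id"], "severity": r["severity"]}
            for r in RULES if re.search(r["pattern"], text)]
```

Each alert carries a severity that the blocking logic below can act on.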

Blocking Logic

  • critical alerts: block the response and return a 403 error
  • high alerts: let through but log
  • whitelisted content: skip detection

Core Files

File                                 Description
scripts/llm-proxy.py                 Proxy main program (v3.1)
scripts/llm-proxy-ctl.sh             Start/stop control script
scripts/content-filter-rules.json    Content filter rules (17 rules)

Default port: 18888

Files

8 total
