Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Requirement Checker

v2.4.0

Automated requirement-document compliance checking skill (AI-driven + smart onboarding + gentle phrasing). Uses an LLM to check whether requirement documents meet the spec, generating concrete issue descriptions, quotations from the original text, targeted suggestions, and GWT acceptance criteria. On first use it walks you through directory and API configuration; once configured, no repeat setup is needed. Supports single-file and batch checks, with an optional summary report. Gentle phrasing: turns "nitpicking" into...

Security Scan

VirusTotal: Suspicious
OpenClaw: Suspicious (high confidence)
Purpose & Capability
The skill is a document/PRD checker that legitimately needs an LLM provider key and the ability to read input/output directories. However, the skill metadata declares no required environment variables or primary credential, while the code reads and writes API credentials (config.json, ~/.openclaw/openclaw.json, OPENAI_API_KEY, DASHSCOPE_API_KEY). That mismatch (no declared secrets, but code that uses secrets) is incoherent.
Instruction Scope
SKILL.md instructs the agent to spawn a subagent to run scripts that scan user-specified input directories and send document content to LLM endpoints. That is consistent with the purpose, but the runtime instructions and scripts will read local files, save config.json with API credentials, and may send portions of document contents to external model providers — users should be aware this transmits possibly sensitive doc contents to the configured provider.
Install Mechanism
There is no external download/install spec — the skill is instruction-plus-local scripts. No remote archive or installer is fetched at install time, which lowers install-time risk. However scripts (e.g., setup_api.sh) can modify user shell profiles if run.
Credentials
The code expects and persists LLM API keys (reads env vars, OpenClaw config, and a local config.json). Yet the declared requirements list no env vars or primary credential. Worse: config.json included in the bundle contains a hard-coded-looking API key and base_url — storing secrets in the skill files and offering scripts that write API keys into user shell profiles is disproportionate and dangerous if not disclosed.
Persistence & Privilege
The skill persists configuration (writes/updates config.json) and the provided setup_api.sh can append API key exports to the user's shell profile. It does not request always:true, but persisting keys in repo files or user profiles increases lasting access risk and should be handled with care.
Scan Findings in Context
[hardcoded_api_key_in_config] unexpected: config.json contains an 'api.api_key' value (sk-sp-d808a63f...) embedded in the repository. While the skill needs an API key to call LLMs, shipping a key in the code is a secret-leak and should not be present in published skill bundles.
[direct_api_calls_with_key] expected: Multiple scripts (generate_gwt_llm.py, generate_gwt_with_llm.py, others) make HTTP/LLM API calls using an API key or OpenClaw provider config — this is expected for an LLM-driven checker but confirms that document content will be sent to external endpoints.
[writes_user_shell_profile] unexpected: setup_api.sh offers an option that appends exports (API key/base_url/model) into the user's shell profile (~/.bashrc, ~/.zshrc). Writing credentials into shell profiles is intrusive and increases the attack surface; it should be a documented, opt-in step.
[sessions_spawn_subagent_usage] expected: SKILL.md and several scripts instruct using OpenClaw sessions_spawn with runtime='subagent' to run the check — consistent with the stated workflow (isolation via subagent).
[hardcoded_user_paths_and_invalid_literals] unexpected: Code contains hard-coded absolute paths and unusual dictionary literal usages (e.g., unquoted runtime: "subagent") which look inconsistent or may be buggy; this suggests the code wasn't fully adapted/tested for the runtime language/environment. Not a direct security finding, but increases risk of unexpected behavior.
What to consider before installing
Before you install or run this skill:

  • Treat document contents as potentially sensitive: the skill will send parts of files to external LLM endpoints (the configured base_url/provider). Only use it with documents you're comfortable sending to the provider.
  • Inspect and remove any embedded secrets: config.json in the package includes an API-key-like string. Do NOT trust bundled keys; replace them with your own key or remove the api.api_key field.
  • Prefer providing credentials via your OpenClaw config or ephemeral environment variables rather than running setup_api.sh, which can append exports to your shell profile. If you must run setup_api.sh, read it first and do not let it write your API key into persistent shell files unless you accept that risk.
  • Consider running the skill in an isolated environment (container or dedicated account), and review the scripts that will read arbitrary input directories.
  • Note that the code has some hard-coded paths and syntactic oddities; review the scripts (especially sessions_spawn call sites) and test in a safe environment.

If you want to proceed: remove the hard-coded key from config.json, supply a least-privilege API key you control, and test with non-sensitive documents. If you want more help, provide the specific files you plan to check and how you plan to configure the provider, and I can point out the exact lines to change or remove.
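One practical way to act on the "inspect embedded secrets" advice is a quick scan of the bundle before installing. The sketch below is illustrative and not part of the skill; the `sk-` prefix pattern matches the key style flagged in the scan findings, but it is only a heuristic, and other providers use other key formats.

```python
import re
from pathlib import Path

# Heuristic pattern for API-key-like strings (e.g. the "sk-..." style
# flagged above). Not an exhaustive secret scanner.
KEY_PATTERN = re.compile(r"\bsk-[A-Za-z0-9\-]{16,}\b")

def scan_bundle(bundle_dir: str) -> list[tuple[str, str]]:
    """Return (relative path, matched string) pairs for key-like strings."""
    hits = []
    root = Path(bundle_dir)
    for path in root.rglob("*"):
        if not path.is_file() or path.suffix not in {".json", ".py", ".sh", ".md"}:
            continue
        text = path.read_text(encoding="utf-8", errors="ignore")
        for match in KEY_PATTERN.findall(text):
            hits.append((str(path.relative_to(root)), match))
    return hits

if __name__ == "__main__":
    for rel_path, key in scan_bundle("requirement-checker"):
        print(f"possible embedded key in {rel_path}: {key[:12]}...")
```

Run it against the unpacked skill directory; any hit in config.json or the scripts is something to remove before use.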

Like a lobster shell, security has layers — review code before you run it.

Tags: ai · checker · latest · prd · requirement · stable

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

SKILL.md

Automated Requirement-Content Checking Skill (AI-Driven + Smart Onboarding)

⚠️ Important Rules (Must Follow)

🤖 Execution: Use a Subagent by Default

When the user asks to check requirement documents, you must:

  1. Execute via a subagent (sessions_spawn with runtime="subagent")
  2. Do not run the scripts directly with exec commands
  3. Automatically close the subagent when done (cleanup: "delete")

Correct example

sessions_spawn({
  runtime: "subagent",
  task: """Please use the requirement-checker skill to check the requirement documents:
  Directory: /Users/lifan/Downloads/requirementDocs/
  Run: python3 ~/.openclaw/workspace/skills/requirement-checker/scripts/batch_check_ai.py
  """,
  timeoutSeconds: 600,
  cleanup: "delete"
})

Incorrect example

# ❌ Don't do this
python3 check_requirement.py

🚀 Quick Start

First use (smart onboarding)

Please check the requirement documents

The agent will walk you through directory and API configuration:

📁 Requirement-document check directories need to be set up

### 1️⃣  Input directory (where the requirement documents live)
Recommended: ~/Downloads/requirementDocs

### 2️⃣  Output directory (where check reports are saved)
Recommended: ~/Downloads/requirementReports

### 3️⃣  API configuration (auto-detected)
✅ Detected OpenClaw configuration (bailian)
✅ Configuration saved to config.json

Please tell me:
- Use the default configuration
- Or customize the directories

Once configured (direct use)

Please check the requirement documents

The agent automatically runs the check with the configured directories.

Need a summary report?

Please check the requirement documents and generate a summary report

📋 What Is Checked

12 compliance checks

  1. Process descriptions
  2. Change-scope annotations
  3. Itemized descriptions
  4. Element completeness
  5. Interaction logic
  6. UI details
  7. Algorithms and formulas
  8. Query relationships
  9. Exception handling
  10. Change type
  11. Prototype attachments
  12. GWT acceptance criteria

Output

  • ✅ Concrete issue descriptions (quoting the original text)
  • ✅ Targeted improvement suggestions (gentle phrasing: "It would be even better if...")
  • ✅ Auto-generated GWT acceptance criteria
  • ✅ Verdict (🌟 Excellent quality / ✅ Pass / 💡 Self-review suggested / 🔧 Needs work)
  • ✅ Check overview (no numeric score, to avoid inaccuracy)
  • ✅ Summary report (optional)
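GWT here means the Given-When-Then (Gherkin-style) structure for acceptance criteria. The exact wording the skill generates depends on the document and the model; the helper below is only an illustration of the shape of one GWT item, not code taken from the skill.

```python
# Illustrative only: renders one Given/When/Then acceptance criterion
# as a Markdown list item, roughly in the style of a check report.
def format_gwt(given: str, when: str, then: str) -> str:
    return "\n".join([
        f"- **Given** {given}",
        f"- **When** {when}",
        f"- **Then** {then}",
    ])

print(format_gwt(
    "the user is on the order page with an empty cart",
    "they click the submit button",
    "an error message is shown and no order is created",
))
```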

🤖 Execution

Runs in a subagent by default

  • ✅ Automatically creates a subagent
  • ✅ Runs the check task
  • ✅ Automatically closes when done

Advantages

  • Isolated execution environment
  • Doesn't tie up the main session
  • Suitable for long-running batch checks

📁 Directory Configuration

First use

The agent will proactively guide you through setting:

  1. Input directory - where the requirement documents live
  2. Output directory - where check reports are saved

After configuration

  • ✅ Configuration is saved to config.json
  • ✅ No repeat setup on later runs
  • ✅ Can be changed at any time

Changing the configuration

Please change requirement-checker's input directory

🔧 API Configuration (improved in v2.4)

✨ Auto-detection (recommended)

On first run the skill auto-detects and saves the configuration:

Detection order

  1. Local cache (config.json) → fastest
  2. Environment variables → portable
  3. OpenClaw configuration → scans all providers automatically
  4. Guided user setup → simplified interaction

Supported providers (by priority):

  1. bailian (Alibaba Cloud Bailian, good value) ⭐
  2. moonshot (Kimi, long context)
  3. openai (general purpose)
  4. zhipu (Zhipu AI)
  5. Other custom providers
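The four-step detection order above could be implemented roughly as follows. This is a hypothetical sketch of the fallback logic, not the skill's actual code; the file names and variables (config.json, OPENAI_API_KEY, DASHSCOPE_API_KEY, ~/.openclaw/openclaw.json) are the ones named in this document, but the assumed structure of openclaw.json (a top-level "providers" map) is a guess.

```python
import json
import os
from pathlib import Path

def detect_api_key(config_path: str = "config.json"):
    """Return (source, key) following the documented detection order, or None."""
    # 1. Local cache (config.json) -> fastest
    cfg = Path(config_path)
    if cfg.is_file():
        key = json.loads(cfg.read_text()).get("api", {}).get("api_key")
        if key:
            return ("config.json", key)
    # 2. Environment variables -> portable
    for var in ("OPENAI_API_KEY", "DASHSCOPE_API_KEY"):
        if os.environ.get(var):
            return (var, os.environ[var])
    # 3. OpenClaw configuration -> scan all providers
    # (the "providers" layout below is an assumption for illustration)
    oc = Path.home() / ".openclaw" / "openclaw.json"
    if oc.is_file():
        providers = json.loads(oc.read_text()).get("providers", {})
        for name, provider in providers.items():
            if provider.get("api_key"):
                return (f"openclaw:{name}", provider["api_key"])
    # 4. Nothing found -> caller falls back to guided user setup
    return None
```

Note that step 1 is exactly why a key shipped inside config.json (as the scan found) silently takes precedence over everything else.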

Manual configuration (optional)

If auto-detection fails, you can choose:

Option 1: OpenClaw configuration (~/.openclaw/openclaw.json)
Option 2: Environment variables (OPENAI_API_KEY, OPENAI_BASE_URL)
Option 3: Enter at runtime (saved to config.json)


📁 Output Location

Reports are generated in the configured output directory by default:

  • xxx_report.md - per-file check report
  • 00_汇总报告.md - summary report (optional)
  • 检查结果.json - structured data

Last Updated: 2026-04-03 (v2.4)

Files

22 total
