Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Spec Engine

v3.0.0

A tool for automated project spec generation and validation: end-to-end automation from idea to task list. Supports (1) intelligent spec generation, (2) configurable validation scoring, (3) automatic subtask decomposition, (4) a web dashboard, (5) version comparison, (6) historical analysis.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for yj85814/spec-engine.

Prompt preview (Install & Setup):
Install the skill "Spec Engine" (yj85814/spec-engine) from ClawHub.
Skill page: https://clawhub.ai/yj85814/spec-engine
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install spec-engine

ClawHub CLI


npx clawhub@latest install spec-engine
Security Scan

VirusTotal: Benign (View report →)
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The skill claims spec generation/validation/decomposition/dashboard/compare/analyze and the repo contains corresponding scripts (generate.py, validate.py, decompose.py, dashboard.py, compare.py, analyze.py). However, there are additional 'daily_news' and 'collectors/*' modules (bilibili, github_oc, clawhub_oc, xiaohongshu) that scrape external sites and produce news reports; those collectors are not described in the SKILL.md command table. Collectors could be related to 'historical analysis' or dashboard enrichment, but their presence is extra capability that a user might not expect from a 'Spec Engine' alone.
Instruction Scope
The scripts perform broad actions: analyze.py and other tools walk directories and read .md files (os.walk, read_file), potentially scanning arbitrary paths. analyze.py's default --dir value points to a relative path '.../teams/shared/specs' (hard-coded default) which implies reading shared/team directories if present. The collectors perform network requests to multiple external services (api.bilibili.com, api.github.com, clawhub.ai, DuckDuckGo/xiaohongshu scraping). Running the provided commands without restricting directories or network access could expose internal spec files and transmit gathered data off-host. SKILL.md does not warn about these behaviors or the default scan path.
Install Mechanism
There is no install spec — the package is instruction/code-only and nothing is downloaded during install. All functionality is provided by included Python scripts using the standard library (with optional requests if available). No remote archive download or unusual installer behavior was found in the provided files.
Credentials
The skill declares no required env vars or credentials, and none are required to call public APIs. However, the code reads proxy environment variables (HTTP_PROXY/HTTPS_PROXY) and may use network access. The analyzer default directory implies access to team/shared paths on disk which is not declared or explained. No credentials (tokens/keys) are requested, which is proportional, but the implicit ability to read local markdowns and make network calls increases data-exposure risk.
Persistence & Privilege
The skill does not request permanent inclusion (always:false). It does not appear to modify other skills or global agent configuration. It writes reports to files when run (save_report/save_json_report) but does not attempt to alter system-wide settings.
Scan Findings in Context
[unicode-control-chars] unexpected: The pre-scan detected unicode control characters in SKILL.md. This is unusual for a README/instructions file and can be used to obfuscate or attempt prompt-injection against language models. Even if harmless, you should inspect SKILL.md raw bytes for invisible characters before allowing automated LLM use.
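Inspecting a file for invisible characters needs nothing beyond the Python standard library. A minimal sketch (the sample string is illustrative, not taken from this skill's SKILL.md):

```python
import unicodedata

def find_control_chars(text):
    """Return (index, codepoint, category) for each invisible
    control/format character (Unicode category Cc or Cf),
    excluding ordinary newlines, carriage returns, and tabs."""
    hits = []
    for i, ch in enumerate(text):
        if ch in "\n\r\t":
            continue
        cat = unicodedata.category(ch)
        if cat in ("Cc", "Cf"):  # control / format (e.g. zero-width chars)
            hits.append((i, f"U+{ord(ch):04X}", cat))
    return hits

# Example: a string hiding a zero-width space and an RTL override
sample = "Install the skill\u200b normally\u202e"
for pos, cp, cat in find_control_chars(sample):
    print(pos, cp, cat)
```

Run this over the raw bytes of SKILL.md (decoded as UTF-8) before letting an agent read the file; any hit in category Cf is worth a manual look.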
What to consider before installing
- Inspect SKILL.md and the Python scripts locally (you already have them). Look specifically at analyze.py and daily_news.py: they walk directories, read .md files, and have a default directory that may point to a shared/team path. Don't run them with defaults unless you want those directories scanned.
- The collectors make outbound HTTP requests to public APIs and search engines; if you run them on a machine with sensitive network access, they will contact external servers. Run them in a sandboxed or containerized environment if you are unsure.
- Remove or override default directory arguments (use --dir) to limit filesystem exposure. Prefer running against a dedicated test folder first.
- Check for invisible unicode control characters in SKILL.md (the pre-scan flagged them) and remove them before using this skill with an LLM, since such characters can be used to manipulate prompt parsing.
- There are no declared secret/env requirements, but proxy env vars are used if present; review your environment variables before running.
- If you plan to let an agent invoke this skill autonomously, be cautious: the combination of filesystem scanning and outbound network calls increases the risk of unintended data leakage. Consider disabling autonomous invocation or restricting the skill to manual use until you're comfortable with its behavior.

If you want, I can: (a) list the exact lines where the default path and network calls occur, (b) show how to run the analyzer safely (example CLI flags), or (c) produce a sanitized version of SKILL.md with control characters removed.

Like a lobster shell, security has layers — review code before you run it.

latest: vk971tz0cvw3k3bg6ctt6x4k85d83x4rb
96 downloads · 0 stars · 1 version
Updated 4w ago
v3.0.0
MIT-0

spec-engine v3.0

Automated project spec generation and validation: end-to-end automation from idea to task list

Feature Overview

Commands

generate    project description in → intelligent extraction → complete spec
validate    checks spec completeness; 100-point score plus A/B/C/D grade
decompose   extracts features from the spec → subtasks with hours, dependencies, and owners
analyze     scans a directory and summarizes historical spec statistics
dashboard   generates a web dashboard visualizing the status of every spec
compare     diffs two spec versions

Quick Start

generate — intelligent spec generation

python scripts/generate.py -i <project description file> [-o spec.md] [--format brief|detailed]

Automatically detects the tech stack, infers the file structure, estimates effort, and flags risks.
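How generate.py detects the tech stack is not shown in this README; keyword matching over the description text is one plausible approach. A hypothetical sketch (the STACK_KEYWORDS table and detect_stack function are invented for illustration, not the script's actual code):

```python
# Hypothetical sketch of tech-stack detection via keyword matching;
# not the actual generate.py implementation.
STACK_KEYWORDS = {
    "python": ["python", "django", "flask", "fastapi"],
    "javascript": ["node", "react", "vue", "typescript"],
    "go": ["golang", " go "],
}

def detect_stack(description: str) -> list[str]:
    """Return the stacks whose keywords appear in the description."""
    text = f" {description.lower()} "  # pad so ' go ' can match at the edges
    return [stack for stack, words in STACK_KEYWORDS.items()
            if any(w in text for w in words)]

print(detect_stack("A FastAPI backend with a React frontend"))
```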

validate — configurable validation

python scripts/validate.py <spec file> [--rules rules.json] [--strict] [--json]

Supports custom rules, checks across four dimensions, and a 100-point scoring scale.
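The grade cutoffs are not documented here; a hypothetical mapping from the 100-point score to A/B/C/D grades might look like this (thresholds invented for illustration, validate.py's real ones may differ):

```python
def grade(score: int) -> str:
    """Map a 0-100 validation score to a letter grade.
    Cutoffs are illustrative, not validate.py's actual thresholds."""
    if score >= 90:
        return "A"
    if score >= 75:
        return "B"
    if score >= 60:
        return "C"
    return "D"

print(grade(92), grade(70), grade(40))
```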

decompose — task decomposition (new in v3.0)

python scripts/decompose.py -i <spec file> [-o tasks.md] [--format table|list] [--json]

Extracts feature requirements from the spec and automatically breaks them into a subtask list with effort estimates, dependencies, suggested owners, and critical-path analysis.
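Critical-path analysis of this kind can be sketched as a longest-path computation over the subtask dependency DAG. A hypothetical illustration (the task table and function are invented, not decompose.py's code):

```python
from functools import lru_cache

# Hypothetical subtask table: name -> (estimated hours, [dependencies])
tasks = {
    "design":    (4, []),
    "backend":   (8, ["design"]),
    "frontend":  (6, ["design"]),
    "integrate": (3, ["backend", "frontend"]),
}

def critical_path(tasks):
    """Return the hours-weighted longest chain through the DAG
    and its total duration."""
    @lru_cache(maxsize=None)
    def finish(name):
        hours, deps = tasks[name]
        return hours + max((finish(d) for d in deps), default=0)

    end = max(tasks, key=finish)
    # Walk backwards along the heaviest dependency at each step
    path = [end]
    while tasks[path[-1]][1]:
        path.append(max(tasks[path[-1]][1], key=finish))
    return list(reversed(path)), finish(end)

path, total = critical_path(tasks)
print(path, total)
```

With the table above, the heaviest chain is design → backend → integrate at 15 hours; frontend (10 hours to finish) has slack.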

analyze — historical analysis

python scripts/analyze.py [--dir <directory>] [--output report.md] [--json]

dashboard — web dashboard (new in v3.0)

python scripts/dashboard.py [-d <directory>] [-o dashboard.html]

Generates a dark-themed HTML dashboard showing every spec's score, tech-stack distribution, and completeness status.

compare — version comparison

python scripts/compare.py <old spec> <new spec> [--json]
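A stdlib sketch of the kind of line-level diff compare.py could produce, using difflib (illustrative only; the real script's output format may differ):

```python
import difflib

def diff_specs(old_text: str, new_text: str) -> str:
    """Unified diff of two spec documents, via stdlib difflib."""
    lines = difflib.unified_diff(
        old_text.splitlines(keepends=True),
        new_text.splitlines(keepends=True),
        fromfile="old_spec.md", tofile="new_spec.md",
    )
    return "".join(lines)

old = "# Spec\nScore: 80\n"
new = "# Spec\nScore: 95\n"
print(diff_specs(old, new))
```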

File Structure

spec-engine/
├── SKILL.md
├── scripts/
│   ├── generate.py      # intelligent spec generation
│   ├── validate.py      # configurable validation
│   ├── decompose.py     # task decomposition (v3.0)
│   ├── analyze.py       # historical analysis
│   ├── dashboard.py     # web dashboard (v3.0)
│   └── compare.py       # version comparison
└── templates/
    ├── spec-template.md # spec template
    └── rules.json       # validation rules config

Features

  • Pure Python standard library, zero external dependencies
  • UTF-8 encoding, cross-platform compatible
  • Fully backward compatible with v1/v2

Use Cases

  • Agent teams writing project specs → generate
  • Pre-commit checks → validate
  • Breaking a confirmed spec into tasks → decompose
  • Team retrospectives → analyze + dashboard
