Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

碎片知识缝纫师 (Fragment Stitcher)

v1.0.2

碎片知识缝纫师 (Fragment Stitcher) - an intelligent tool for collecting, relating, and recomposing knowledge. Use when the user needs to: (1) organize fragments scattered across sources (WeChat, web pages, documents, meeting notes) into a coherent system; (2) discover how new content relates to the existing knowledge base (similar concepts, related topics, logical continuation); (3) generate knowledge-connection notes that automatically stitch related fragments together; (4) automatically draft an outline once a topic has accumulated enough fragments. …

1 · 168 · 0 current · 0 all-time

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt below, then paste it into OpenClaw to install teamoplum/fragment-stitcher.

Prompt Preview: Install & Setup
Install the skill "碎片知识缝纫师" (teamoplum/fragment-stitcher) from ClawHub.
Skill page: https://clawhub.ai/teamoplum/fragment-stitcher
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install fragment-stitcher

ClawHub CLI


npx clawhub@latest install fragment-stitcher
Security Scan
VirusTotal: Suspicious (view report)
OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description (collect fragments, discover relationships, generate stitch notes and outlines) match the included scripts: collector, relationship_finder, stitcher, draft_generator. The code implements local collection, simple text analysis, similarity and relationship heuristics, and writing notes to a knowledge/ directory. No unrelated credentials, binaries, or external services are requested.
Instruction Scope
The SKILL.md and README instruct the agent to accept pasted text, file paths, screenshots, and URLs, and to read and write under knowledge/. The code implements text and file reading; collect_from_url requires the HTML content to be provided (it does not fetch URLs itself), and OCR for screenshots is not implemented (noted as a TODO in the code). The scripts read arbitrary file paths supplied to collect_from_file and will read and write files in the specified knowledge base. That is expected for this tool, but users should avoid pointing the skill at sensitive system directories or private data stores.
Install Mechanism
There is no install spec (instruction-only skill with bundled scripts). No remote downloads or installers are present. All code is local, and no external packages or network fetches are invoked by the provided scripts.
Credentials
The skill requests no environment variables, no credentials, and no config paths. The scripts do perform local file I/O only. There are no hard-coded secrets, external endpoints, or references to unrelated services in the code.
Persistence & Privilege
always is false and disable-model-invocation is false (normal). The skill writes files to a local knowledge/ directory under the current working directory (or to a user-specified knowledge base). It does not modify other skills or global agent configuration.
Assessment
This skill operates on local files and a local knowledge/ directory. Before installing or running:

  1. Review and choose a dedicated knowledge_base path (do not point it to system folders or sensitive directories).
  2. Do not pass paths to private secrets or system config files when using collect_from_file.
  3. Be aware that OCR and automatic URL fetching are mentioned but not implemented; you must supply webpage content or use separate tools for screenshots and URLs.
  4. The scripts derive filenames from timestamps, tags, and hashes, so ensure tags are sanitized (avoid injecting path separators) and back up existing notes if needed.
  5. There are minor bugs and rough edges (e.g., a datetime.timedelta reference bug and several TODOs), so run in a controlled environment and inspect outputs before trusting automated changes.
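The tag-sanitization concern above can be addressed before any tag reaches the skill. A minimal sketch, assuming nothing about the bundled scripts; `sanitize_tag` is a hypothetical helper, and the exact character policy is illustrative:

```python
import re

def sanitize_tag(tag: str) -> str:
    """Strip path separators and other unsafe characters from a tag
    before it is used in a note filename. Hypothetical helper; the
    bundled scripts may not perform this step themselves."""
    # Collapse path separators and whitespace runs into a single hyphen.
    tag = re.sub(r"[\\/\s]+", "-", tag.strip())
    # Keep only word characters (including CJK), hyphens, and underscores.
    tag = re.sub(r"[^\w\u4e00-\u9fff-]", "", tag)
    return tag or "untagged"
```

With this in place, a tag like `../etc/passwd` can no longer escape the knowledge/ directory via the filename.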

Like a lobster shell, security has layers — review code before you run it.

latest · vk97d76ecj2gfxe96zmg1bjpz3d8399vm
168 downloads
1 star
3 versions
Updated 23h ago
v1.0.2
MIT-0

碎片知识缝纫师 (Fragment Stitcher)

Core Capabilities

1. Intelligent Collection

Users can provide fragment content in any of the following ways:

  • Paste text directly (article excerpts, chat highlights, quick ideas)
  • Upload a screenshot (text is extracted automatically via OCR)
  • Provide a file path (the document's contents are read)

Extracted fields:

  • Core ideas / key information
  • Source (web / WeChat / document / meeting)
  • Topic tags
  • Creation time

2. Relationship Discovery

Analyzes how new content relates to the existing knowledge base:

| Relation type | Detection method | Example |
| --- | --- | --- |
| Concept similarity | Keyword overlap, semantic similarity | "AI safety" vs. "model alignment" |
| Topic relevance | Same subject domain | Several notes on "product growth" |
| Logical continuation | One text follows from another | Requirements doc → technical design → implementation notes |
| Complementary | Different angles on the same problem | Positive case + negative case |
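The scan report describes these detections as "simple text analysis, similarity and relationship heuristics". A minimal sketch of what keyword-overlap detection could look like; the function names and thresholds are illustrative assumptions, not taken from the bundled scripts:

```python
def keyword_overlap(a: str, b: str) -> float:
    """Jaccard overlap of whitespace-separated tokens: a crude stand-in
    for the skill's similarity heuristic (assumed, not the actual code)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta or not tb:
        return 0.0
    return len(ta & tb) / len(ta | tb)

def classify_relation(score: float) -> str:
    # Thresholds are illustrative, not taken from the skill.
    if score >= 0.5:
        return "concept-similar"
    if score > 0.0:
        return "topic-related"
    return "unrelated"
```

A real implementation would also stem or segment tokens (especially for Chinese text, where whitespace splitting alone is insufficient) and weight rare terms more heavily.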

3. Automatic Stitching

Generates a "knowledge connection note" in this format:

📌 Connection Found
Source: [new fragment]
Related: [existing knowledge]
Connection: [specific explanation]

Typical output:
"The AI-safety article you read today shares an assumption in Chapter 3 with the 'model alignment' paper you saved last week"
"This product note could be added to the 'Risks' section of the PRD you are writing"
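Filling that template is straightforward string assembly. A hypothetical renderer (not the bundled stitcher script) might look like this, with field labels mirroring the README's format:

```python
def render_connection_note(new_fragment: str, related: str, reason: str) -> str:
    """Render the connection-note template from the README.
    Sketch only; the real stitcher's output format may differ."""
    return (
        "📌 Connection Found\n"
        f"Source: {new_fragment}\n"
        f"Related: {related}\n"
        f"Connection: {reason}\n"
    )
```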

4. Progressive Drafting

When a topic has accumulated ≥ 5 fragments, prompt the user to generate an outline draft:

📝 [Topic name] Outline Draft

## Collected Points
- [Point 1]
- [Point 2]
...

## Suggested Structure
1. [Chapter 1]
2. [Chapter 2]
...

## To Be Filled In
- [Missing key information]
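The ≥ 5 threshold check reduces to a tally of fragments per topic. A minimal sketch; the helper and constant names are assumptions, not the skill's actual code:

```python
from collections import Counter

OUTLINE_THRESHOLD = 5  # per the README: >= 5 fragments on one topic

def topics_ready_for_outline(fragment_tags):
    """Given one topic tag per fragment, return the topics that have
    accumulated enough fragments to suggest an outline draft."""
    counts = Counter(fragment_tags)
    return [topic for topic, n in counts.items() if n >= OUTLINE_THRESHOLD]
```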

Workflow

Step 1: Receive Fragments

Ask the user for fragment content or sources. Multiple fragments can be received in one batch.

Step 2: Extract & Store

Save each fragment to the knowledge/fragments/ directory:

  • Naming format: YYYY-MM-DD-[seq]-[topic].md
  • Metadata: date, source, tags, related fragment IDs
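The naming convention above can be sketched as a one-line formatter. This is a hypothetical helper; the real script's formatting (zero-padding, separators) may differ:

```python
from datetime import date

def fragment_filename(seq: int, topic: str, day=None) -> str:
    """Build the YYYY-MM-DD-[seq]-[topic].md name described in Step 2.
    Sketch only; assumes a two-digit sequence number."""
    day = day or date.today()
    return f"{day.isoformat()}-{seq:02d}-{topic}.md"
```

Note that `topic` should be sanitized before use, since it becomes part of a filesystem path.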

Step 3: Relationship Discovery

Scan the existing fragment library for potential connections:

  1. Read the existing fragments under the knowledge/ directory
  2. Compute each one's similarity to the new fragment
  3. List the top 3 related fragments and the reason for each connection
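The three sub-steps above amount to score-and-rank. A minimal sketch using token overlap as a stand-in for the skill's actual similarity measure; `top_related` and its in-memory `library` argument are illustrative (the real scripts read fragments from disk):

```python
def top_related(new_text: str, library: dict, k: int = 3):
    """Rank existing fragments by token overlap with the new fragment
    and return the top k as (name, score) pairs. Sketch only."""
    def overlap(a: str, b: str) -> float:
        ta, tb = set(a.lower().split()), set(b.lower().split())
        return len(ta & tb) / len(ta | tb) if ta and tb else 0.0
    scored = sorted(
        ((name, overlap(new_text, text)) for name, text in library.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )
    return scored[:k]
```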

Step 4: Generate a Connection Note

If any connections are found, save a connection note to knowledge/connections/

Step 5: Topic Aggregation Check

Check the fragment count for each topic; if it reaches the threshold, prompt the user to generate an outline

Storage Layout

knowledge/
├── fragments/      # raw fragments
│   └── 2024-01-15-01-AI安全.md
├── connections/    # connection notes
│   └── 2024-01-15-AI安全-模型对齐.md
├── outlines/       # outline drafts
└── index.md        # fragment index
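If you want this layout in place before the first fragment arrives, it is a few lines of setup. A sketch under the assumption that the skill does not object to pre-existing directories (the scripts may well create them lazily on their own):

```python
import os

def init_knowledge_base(root: str = "knowledge") -> None:
    """Create the storage layout from the README under `root`.
    Sketch only; existing files and directories are left untouched."""
    for sub in ("fragments", "connections", "outlines"):
        os.makedirs(os.path.join(root, sub), exist_ok=True)
    index = os.path.join(root, "index.md")
    if not os.path.exists(index):
        with open(index, "w", encoding="utf-8") as f:
            f.write("# Fragment Index\n")
```

Pointing `root` at a dedicated directory (rather than a system folder) also addresses item 1 of the scan's assessment.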

Usage Examples

User says: "Help me organize the AI knowledge I've picked up recently"

Reply: Sure! Let me first see what knowledge fragments you already have. Provide the content you want to organize, or tell me the source (a folder, saved web pages, etc.), and I will:

  1. Extract the key information
  2. Discover connections
  3. Generate knowledge links

User says: "This note could be linked to my earlier 'product MVP' note"

Reply: Got it! Let me scan the knowledge base, find the best connection point, and generate a connection note.
