kb-digest

v1.2.0

Knowledge distiller: any link, PDF, or text, distilled into structured knowledge cards with one command. Supports generating summaries, Q&A pairs, mind maps, and podcast scripts. Triggers when the user wants to digest articles or research papers, organize information, or do knowledge management.

0 stars · 122 downloads · 0 current · 0 all-time
by vine.xio (@vineindalvik)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for vineindalvik/kb-digest.

Install the skill "kb-digest" (vineindalvik/kb-digest) from ClawHub.
Skill page: https://clawhub.ai/vineindalvik/kb-digest
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install kb-digest

ClawHub CLI


npx clawhub@latest install kb-digest
Security Scan
VirusTotal: Pending. View report →
OpenClaw: Benign (high confidence)
Purpose & Capability
The skill's name/description (extract structured knowledge from URLs/PDF/text) aligns with the code and SKILL.md. Minor metadata mismatches: registry metadata lists no required env/binaries while SKILL.md declares python3 and three OpenClaw-injected LLM env vars; SKILL.md version (1.1.0) differs from registry version (1.2.0). These look like packaging/metadata inconsistencies rather than malicious intent.
Instruction Scope
SKILL.md and handler.py instruct the agent to fetch web pages or PDFs, extract text, then call an LLM endpoint with that text. The runtime only reads a local .env (optional), command-line args, and the specified input files/URLs. There are no instructions to read unrelated system files or to transmit data to endpoints other than the configured LLM base URL and an optional Feishu webhook.
Install Mechanism
No install spec in registry; it's an instruction-only skill with requirements.txt and a recommendation to pip install. This is low-risk compared with arbitrary downloads. The pip packages requested (requests, pypdf, markdownify, python-dotenv) are appropriate for the described functionality.
Credentials
The skill requires an LLM API key/base URL/model (OPENCLAW_LLM_*) which are necessary for its behavior. The optional FEISHU_WEBHOOK_URL for pushing results is proportional. It does not request unrelated credentials or system config paths.
Persistence & Privilege
always is false and the skill does not attempt to modify other skills or system-wide configs. It only reads/writes files it is explicitly given (e.g., saving outputs) and an optional local .env file in the skill directory.
Assessment
This skill is coherent with its stated purpose, but check a few practical points before running: (1) The skill sends extracted content to whatever LLM base URL and API key are configured — do not send sensitive or confidential documents unless you trust the model endpoint. (2) The SKILL.md expects OpenClaw to inject OPENCLAW_LLM_* variables; verify the platform will provide them or pass overrides via CLI. (3) The package suggests installing dependencies via pip; review requirements.txt and run in a virtual environment. (4) The registry metadata/version mismatch and the SKILL.md declarations are minor packaging issues — if you need high assurance, ask the publisher for an authoritative homepage or audit the repository. (5) If you plan to enable Feishu push, ensure the FEISHU_WEBHOOK_URL points to a trusted webhook (webhooks can receive whatever output the skill sends).

Like a lobster shell, security has layers — review code before you run it.

latest: vk979m6w6qqnrrcfx2kxgractqx84qf8x
122 downloads · 0 stars · 2 versions
Updated 2w ago
v1.2.0
MIT-0

kb-digest

Any content → structured knowledge cards. Throw it in, get something usable out.

Quick Start

cd /path/to/kb-digest
pip install -r requirements.txt
python handler.py --url "https://example.com/article"

OpenClaw automatically injects OPENCLAW_LLM_API_KEY, OPENCLAW_LLM_BASE_URL, and OPENCLAW_LLM_MODEL; no manual configuration is needed.
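When running outside OpenClaw, or when passing overrides via the CLI (as the security review on this page suggests), those three variables have to be resolved somehow. A minimal sketch of that resolution order, with explicit overrides winning over the environment — `resolve_llm_config` is an illustrative helper, not the skill's actual code:

```python
import os

# The variable names below are the ones this README documents.
REQUIRED_VARS = (
    "OPENCLAW_LLM_API_KEY",
    "OPENCLAW_LLM_BASE_URL",
    "OPENCLAW_LLM_MODEL",
)

def resolve_llm_config(overrides=None):
    """Resolve LLM settings: explicit overrides first, then the environment.

    Raises if any required variable is still missing, so a misconfigured
    run fails early instead of mid-request.
    """
    overrides = overrides or {}
    cfg = {key: overrides.get(key) or os.environ.get(key) for key in REQUIRED_VARS}
    missing = [k for k, v in cfg.items() if not v]
    if missing:
        raise RuntimeError("Missing required settings: " + ", ".join(missing))
    return cfg
```

The real `handler.py` also reads an optional local `.env` (via python-dotenv) before falling back to the injected values.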

Commands

# Generate a knowledge card from a URL (default output)
python handler.py --url "https://arxiv.org/abs/1706.03762"

# From a PDF
python handler.py --pdf paper.pdf

# From pasted text
python handler.py --text "Turn this content into structured notes..."

# Choose an output format
python handler.py --url "..." --output card      # knowledge card (default)
python handler.py --url "..." --output mindmap   # mind map (Markdown)
python handler.py --url "..." --output qa        # Q&A pairs
python handler.py --url "..." --output podcast   # podcast dialogue script
python handler.py --url "..." --output summary   # plain summary

# Save to a file
python handler.py --url "..." --save output.md

# Push to Feishu (requires FEISHU_WEBHOOK_URL)
python handler.py --url "..." --push feishu

# Batch processing
python handler.py --batch urls.txt

Sample Output (Knowledge Card)

📚 Knowledge Card | Attention Is All You Need

💡 One-liner
  Replaces RNNs/CNNs with pure self-attention, introducing the Transformer architecture.

🔑 Key Points
  1. Self-Attention lets any two positions in a sequence interact directly, with no step-by-step propagation
  2. Multi-Head Attention captures different semantic relations across multiple subspaces
  3. Positional Encoding injects position information via sinusoids
  4. Trains about 8× faster than RNNs (parallelizable)

❓ Q&A
  Q: Why is it faster than an RNN?
  A: An RNN must process tokens serially; the Transformer computes over the whole sequence in parallel

🧠 Mind Map
  Transformer
  ├── Encoder ×6
  │   ├── Multi-Head Self-Attention
  │   └── Feed-Forward Network
  └── Decoder ×6
      ├── Masked Self-Attention
      ├── Cross-Attention (attends to the Encoder)
      └── Feed-Forward Network

🔗 Further Reading
  BERT | Vision Transformer (ViT)

Source: https://arxiv.org/abs/1706.03762
Generated: 2026-04-11 18:05

Environment Variables

Variable                Description            Required
OPENCLAW_LLM_API_KEY    LLM API key            ✅ (auto-injected by OpenClaw)
OPENCLAW_LLM_BASE_URL   LLM API base URL       ✅ (auto-injected by OpenClaw)
OPENCLAW_LLM_MODEL      LLM model name         ✅ (auto-injected by OpenClaw)
FEISHU_WEBHOOK_URL      Feishu push webhook    Only for --push feishu
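The Feishu row implies a custom-bot webhook push. A rough sketch of what that involves, using only the standard library; the function names are illustrative (not the skill's real API), and the payload shape is Feishu's standard custom-bot text-message format, not verified against `handler.py`:

```python
import json
import urllib.request

def build_feishu_payload(text):
    """Build a Feishu custom-bot text-message body."""
    return {"msg_type": "text", "content": {"text": text}}

def push_to_feishu(webhook_url, text):
    """POST the payload to the webhook; returns the HTTP status code."""
    data = json.dumps(build_feishu_payload(text)).encode("utf-8")
    req = urllib.request.Request(
        webhook_url, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

As the security review notes, the webhook receives whatever output the skill sends, so point FEISHU_WEBHOOK_URL only at a webhook you trust.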

Supported Input Formats

  • URL: web articles, arXiv papers, GitHub READMEs
  • PDF: research papers, reports, book chapters
  • Text: paste any content directly
  • Batch: a file listing multiple URLs, processed one by one
