Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Bilibili UP to KB

v0.1.0

Convert Bilibili (B站) videos into a searchable text knowledge base. Supports single videos and batch processing of entire UP主 channels. Uses local whisper.cp...

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.

Security Scan

VirusTotal: Suspicious
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The name and description match what the scripts do: download Bilibili videos, run whisper.cpp locally, clean transcripts via an LLM-style tool, and build a knowledge base. The skill does not declare required environment variables in its registry metadata, even though SKILL.md and the scripts reference WHISPER_CLI, WHISPER_MODEL, OPENCODE_BIN, CLEAN_MODEL, and optionally GEMINI_API_KEY and browser cookies. That mismatch is unexpected but not necessarily malicious.
Instruction Scope
The scripts perform exactly the data flows described (yt-dlp → ffmpeg → whisper → clean with opencode → index). However, they optionally use --cookies-from-browser to access member-only content (which reads browser cookies via yt-dlp), and they feed transcript chunks into the opencode CLI. If opencode.run or the chosen CLEAN_MODEL executes remotely or fetches from a remote model hub, transcripts will be sent to a network service. SKILL.md gives the agent broad discretion (batching, auto-chunking), but the main risk is exfiltration of transcript text via a third-party model/CLI, or via cloud LLM keys if configured (GEMINI_API_KEY is referenced in the docs).
Install Mechanism
There is no automated install spec (instruction-only), so nothing is dropped automatically. The references recommend downloading whisper models from Hugging Face or a mirror (hf-mirror.com) via curl — these are expected but are external downloads the user must trust. Because the skill doesn't auto-extract or run arbitrary remote payloads, installation risk is moderate but depends on which model/CLI the user chooses to install.
Credentials
Registry metadata lists no required credentials, which aligns with local whisper usage, but the scripts and docs reference optional sensitive inputs: --cookies-from-browser (access to browser cookies), GEMINI_API_KEY (for an alternate summarize tool), and environment variables pointing at the opencode and whisper binaries. Requesting browser cookies or an LLM API key is proportionate only for gated content or LLM-based cleaning; these inputs are optional but sensitive. The skill does not require unrelated cloud credentials, so the concern is optional sensitive inputs that could expose transcripts to external services.
Persistence & Privilege
The skill is user-invocable, not always-enabled, and does not request persistent platform privileges. Scripts operate in working directories and temporary folders; they do not modify other skills or system-wide settings.
What to consider before installing
This skill appears to do what its description says. Before running it, consider:

  1. Don't pass browser cookies unless you trust the environment; --cookies-from-browser can expose cookies for other sites in your browser to yt-dlp.
  2. Confirm how the opencode CLI and the chosen CLEAN_MODEL operate: if opencode runs inference remotely or downloads models at runtime, your transcripts will be sent to an external service. If you need privacy, ensure opencode is configured to run locally, or disable cleaning.
  3. Only download whisper models from trusted hosts (official GitHub releases or Hugging Face with verified URLs); be cautious with third-party mirrors.
  4. If you use a cloud LLM key (GEMINI_API_KEY) for cleaning, assume transcripts will be sent to that provider.
  5. Run the scripts in a sandboxed environment (container or VM) for initial tests, and inspect network traffic if you are concerned about exfiltration.

Confirmation from the author that opencode/minimax runs fully offline (or documentation showing local-only behavior) would increase confidence.




SKILL.md

Bilibili UP to KB

Convert B站 videos (single or entire channels) into cleaned, structured text knowledge bases.

Design Principle

Agent orchestrates, scripts execute. The agent's job is to decide WHAT to do and kick off the right script. All mechanical, repetitive work (downloading, transcribing, cleaning) is handled by shell scripts with built-in parallelism. The agent NEVER loops through videos one by one — it runs ONE command and the script handles concurrency internally.

Output Structure

kb/UP主名_UID/
├── BV号_视频标题.txt          # Cleaned transcript (user-facing)
├── BV号_视频标题.meta.json    # Video metadata
├── index.md                   # Summary index
└── .raw/                      # Hidden: whisper transcripts (if any)
    └── BV号_视频标题.txt

Key decisions:

  • File names include title for readability (BV1xxx_标题.txt)
  • Folder includes UP主 name (UP主名_UID/)
  • Raw transcripts hidden in .raw/
  • No _clean suffix — clean files are the main files
  • Per-video .meta.json with title, uploader, duration, etc.
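The naming convention above could be derived with a small helper like the following sketch (`sanitize` and `kb_filename` are hypothetical names, not functions from the actual scripts):

```shell
#!/bin/sh
# Build "BV号_标题.txt" style names, replacing characters that are
# unsafe in file names with underscores (hypothetical helper).
sanitize() {
  printf '%s' "$1" | tr '/:*?"<>|' '________'
}

kb_filename() {
  bvid=$1
  title=$2
  printf '%s_%s.txt\n' "$bvid" "$(sanitize "$title")"
}

kb_filename "BV1xxx" "视频/标题 demo"
```

The real scripts may sanitize differently; the point is that the BV ID stays first so files sort and dedupe by video.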

Full Pipeline

Step 1: Download AI subtitles (fast, high concurrency OK)

# 30-50 concurrent is fine — B站 CDN handles it
scripts/batch_channel.sh "https://space.bilibili.com/UID/" ./kb/output zh 0 30

Step 2: For videos without AI subtitles, run whisper (LOW concurrency!)

# Metal GPU can only handle 1-4 parallel whisper instances
# More = slower total (GPU saturation)
scripts/batch_channel.sh "https://space.bilibili.com/UID/" ./kb/output zh 0 2 --whisper-only

Step 3: Clean + Index

# Clean whisper transcripts (AI subtitles skip automatically)
scripts/batch_clean.sh ./kb/UP主名_UID/
scripts/generate_index.sh ./kb/UP主名_UID/

Concurrency Guide

Critical: Different stages need different concurrency!

| Stage | Bottleneck | Recommended | Why |
|---|---|---|---|
| AI subtitle download | Network | 30-50 | B站 CDN handles high parallelism |
| Whisper transcribe | Metal GPU | 1-4 | GPU saturates; more instances are slower overall |
| Transcript cleaning | API rate limit | ALL (0) | Network I/O only |

Quick Start — Single Video

scripts/transcribe.sh "https://www.bilibili.com/video/BV..." ./output zh

Transcript Cleaning

AI subtitles are clean enough — skipped by default.

| Source | Cleaning needed? |
|---|---|
| B站 AI subtitles | No, directly usable |
| whisper fallback | Yes, goes through cleaning |

Cleaning uses opencode/minimax-m2.5-free:

  1. Fix homophones and garbled words
  2. Add punctuation
  3. Output MUST be Simplified Chinese
  4. Keep uncertain proper nouns unchanged
  5. Never substitute one real term for another

Chunk size: 80 lines. Retry: 3 attempts with 3s delay.
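The "3 attempts with 3s delay" policy amounts to a simple retry wrapper; a minimal sketch (the `retry` helper is illustrative, not the script's actual code, and `clean_chunk` is a hypothetical stand-in for the opencode call):

```shell
#!/bin/sh
# Retry a command up to N times with a fixed pause between attempts,
# mirroring the documented "3 attempts, 3s delay" policy.
retry() {
  attempts=$1
  delay=$2
  shift 2
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then
      return 0
    fi
    if [ "$i" -lt "$attempts" ]; then
      sleep "$delay"
    fi
    i=$((i + 1))
  done
  return 1
}

# Usage (hypothetical): retry 3 3 clean_chunk chunk_001.txt
```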

⚠️ Long-running tasks

Use nohup to avoid session compaction killing processes:

nohup bash scripts/batch_clean.sh ./kb/UP主名_UID/ 0 80 > /tmp/clean.log 2>&1 &

batch_clean.sh is resumable — safe to re-run after interruption.
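Resumability here is essentially a skip-if-output-exists check. A minimal sketch of that pattern, assuming the documented layout (`process` is a hypothetical stand-in for the real cleaning step):

```shell
#!/bin/sh
# Re-runnable loop: skip any .raw transcript whose cleaned output
# already exists, so an interrupted batch can simply be restarted.
clean_dir() {
  kb_dir=$1
  for raw in "$kb_dir"/.raw/*.txt; do
    [ -e "$raw" ] || continue            # no raw transcripts at all
    out=$kb_dir/$(basename "$raw")
    if [ -f "$out" ]; then
      echo "skip: $out already cleaned"
      continue
    fi
    process "$raw" > "$out"              # hypothetical cleaning step
  done
}
```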

⚠️ Large Channel Handling (1000+ videos)

Script auto-detects large channels (>800 videos) and fetches in chunks to avoid timeout.

# Auto-chunked, just re-run to resume
nohup bash scripts/batch_channel.sh "https://space.bilibili.com/UID/" ./kb/output > /tmp/batch.log 2>&1 &

If still fails, manually fetch URL list:

# Fetch the channel's video URLs in pages of 500
for i in $(seq 1 500 2000); do
  yt-dlp --flat-playlist --playlist-start $i --playlist-end $((i+499)) \
    --print url "https://space.bilibili.com/UID/" >> /tmp/urls.txt
done
# Transcribe 20 URLs in parallel
xargs -P 20 -I {} bash scripts/transcribe.sh {} ./kb/OUTPUT zh < /tmp/urls.txt

⚠️ Thermal & Fan Warning

Keep system cool — avoid fan spin!

| Stage | Risk | Mitigation |
|---|---|---|
| Whisper (GPU) | HIGH | Keep concurrency ≤2, monitor temps |
| AI subtitle download | Low | Can run 30-50 concurrent |
| Cleaning (API) | None | Pure network I/O, no local load |

If fans start spinning:

  • Stop whisper processes immediately
  • Wait for cooldown
  • Resume with lower concurrency (1-2)

# Check GPU temp (if using CUDA)
nvidia-smi

# Check Mac CPU/GPU temp
sudo powermetrics --sample-rate 1000 -i 1 -n 1 | grep -E "CPU|GPU"

Dependencies

Required: yt-dlp, ffmpeg, whisper.cpp (+ model), opencode CLI
Optional: browser cookies for member-only content (--cookies-from-browser chrome)
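A quick preflight check for the required tools, using only `command -v` (the binary names follow this section; `check_deps` itself is a hypothetical helper, and the WHISPER_CLI/OPENCODE_BIN fallbacks assume the documented defaults):

```shell
#!/bin/sh
# Verify required CLI tools are on PATH before starting a batch run.
check_deps() {
  missing=0
  for tool in "$@"; do
    if ! command -v "$tool" > /dev/null 2>&1; then
      echo "missing: $tool" >&2
      missing=1
    fi
  done
  return "$missing"
}

check_deps yt-dlp ffmpeg "${WHISPER_CLI:-whisper-cli}" "${OPENCODE_BIN:-opencode}" \
  || echo "install the missing tools before running batch scripts" >&2
```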

Environment Variables

| Variable | Default | Description |
|---|---|---|
| WHISPER_CLI | whisper-cli | Path to the whisper.cpp CLI binary |
| WHISPER_MODEL | ~/.whisper-cpp/ggml-small.bin | Whisper model file |
| OPENCODE_BIN | ~/.opencode/bin/opencode | opencode CLI binary |
| CLEAN_MODEL | opencode/minimax-m2.5-free | Cleaning model |
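One way to set these in a shell profile, keeping the documented defaults as fallbacks (the paths are illustrative; adjust to where you actually installed the binaries):

```shell
# Example overrides for the skill's environment variables.
# Each value falls back to the documented default when unset.
export WHISPER_CLI="${WHISPER_CLI:-whisper-cli}"
export WHISPER_MODEL="${WHISPER_MODEL:-$HOME/.whisper-cpp/ggml-small.bin}"
export OPENCODE_BIN="${OPENCODE_BIN:-$HOME/.opencode/bin/opencode}"
export CLEAN_MODEL="${CLEAN_MODEL:-opencode/minimax-m2.5-free}"
```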

Tips

  • China users: Use hf-mirror.com for whisper model
  • Long videos (1h+): Auto-segmented into 10-min chunks
  • Resumable: All batch scripts skip already-processed files

Files

7 total