Skill v0.2.0

ClawScan security

Web Reader · ClawHub's context-aware review of the artifact, metadata, and declared behavior.

Scanner verdict

Benign · Mar 14, 2026, 8:06 AM
Verdict
Benign
Confidence
high
Model
gpt-5-mini
Summary
The skill is internally consistent with its description: it fetches web articles/videos, saves them to disk, and supports post-processing; nothing in the files or instructions appears to perform unexpected exfiltration or require unrelated credentials.
Guidance
This skill appears to do what it claims: fetch articles/videos, convert them to markdown, download images, and archive to disk. Before installing or using it:
1) It writes files under the output/archive directory you provide (and reads ~/.claude/web-reader.json if present); choose a safe test directory first.
2) To fetch some protected content (Feishu, private Bilibili), the skill may use your browser cookies or run a headless browser that carries your credentials; only proceed if you trust the environment and understand that this will access those cookies.
3) Dependencies (scrapling, camoufox, yt-dlp, html2text) are installed and used at runtime; review them and install them from official sources.
4) If you will archive sensitive documents, inspect fetcher.py and lib/* yourself (they are included) to confirm their behavior.
For extra assurance, run the CLI manually on a non-sensitive URL and review the created files before enabling any automated or autonomous usage.
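As a point of reference, the config file mentioned above typically only needs to name an archive directory. The key shown here is an assumption for illustration; check SKILL.md for the authoritative schema.

```json
{
  "archive_dir": "~/WebArchive"
}
```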

Review Dimensions

Purpose & Capability
ok · Name/description (web/article/video fetcher + archive/summary) matches the code and runtime instructions. Declared behavior (scrapling/camoufox/yt-dlp, markdown conversion, image localization, Feishu virtual scroll) is implemented in the included modules.
Instruction Scope
note · SKILL.md and the code direct the agent to read a skill-specific config at ~/.claude/web-reader.json, run local CLI tools (scrapling, yt-dlp, camoufox), and write archived markdown + images to the filesystem, all consistent with the stated purpose. Note: the skill will use a browser fetch with credentials for Feishu images (page.evaluate(... fetch(..., {credentials:'include'}))), which legitimately enables downloading authenticated content but can access cookies in the browser context when used.
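The credentialed-fetch pattern flagged above looks roughly like the following sketch. Only the fetch(..., {credentials: 'include'}) idiom is taken from the review; the function name and surrounding Playwright usage are hypothetical reconstructions, not the skill's actual code. Run inside an authenticated browser context, the fetch inherits that context's cookies, which is exactly why it can download protected Feishu images and why it deserves scrutiny.

```python
# Sketch of the credentialed in-page fetch pattern noted in Instruction Scope.
# JS_FETCH reflects the idiom from the review; fetch_image_bytes is a
# hypothetical helper, not the skill's actual code.

# JavaScript evaluated inside the page: the browser attaches the page's own
# cookies because of credentials: 'include', so authenticated images resolve.
JS_FETCH = """
async (url) => {
  const resp = await fetch(url, { credentials: 'include' });
  const buf = await resp.arrayBuffer();
  return Array.from(new Uint8Array(buf));   // serializable back to Python
}
"""

def fetch_image_bytes(page, url: str) -> bytes:
    """Download `url` through the page's browser context (cookies included).

    `page` is assumed to be a Playwright Page; page.evaluate(js, arg) runs
    the JS function with `url` as its argument and returns the byte list.
    """
    data = page.evaluate(JS_FETCH, url)
    return bytes(data)
```

The point of the sketch is the trust boundary: anything evaluated with credentials:'include' in that context can read what the logged-in session can read.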
Install Mechanism
ok · No install spec is included (instruction-only install). The package relies on common third-party Python packages and external CLIs; the code checks dependencies and prints pip install hints. No remote download/execute URLs or opaque installers were found in the repo.
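The dependency check described here can be as simple as probing for each module or executable and printing an install hint. This is a minimal sketch of that pattern under stated assumptions; the exact package list and messages in fetcher.py may differ.

```python
import importlib.util
import shutil

# Python packages and external CLIs the review says the skill relies on.
PY_DEPS = ["scrapling", "html2text"]
CLI_DEPS = ["yt-dlp", "camoufox"]

def missing_dependencies() -> list[str]:
    """Return human-readable hints for anything not installed."""
    hints = []
    for mod in PY_DEPS:
        # find_spec returns None when the top-level package is absent.
        if importlib.util.find_spec(mod) is None:
            hints.append(f"pip install {mod}")
    for exe in CLI_DEPS:
        # shutil.which returns None when the CLI is not on PATH.
        if shutil.which(exe) is None:
            hints.append(f"{exe}: not on PATH (install from its official source)")
    return hints

if __name__ == "__main__":
    for hint in missing_dependencies():
        print(hint)
```

A check like this is benign precisely because it only prints hints rather than downloading or executing anything itself, which matches the review's finding.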
Credentials
note · The skill requests no environment variables or primary credentials, which is appropriate. However, it can optionally use browser cookies (--cookies-from-browser / page fetch with credentials) to access protected content; this exposes local browser cookie data to yt-dlp or the headless browser session (expected for accessing private content, but sensitive). It also reads a user config file (~/.claude/web-reader.json), referenced in SKILL.md but not declared elsewhere, to determine archive_dir.
Persistence & Privilege
ok · The skill's always flag is false, and it does not request persistent or privileged platform-level presence or attempt to modify other skills. It writes archives to user-specified filesystem paths (expected behavior).