Skill v1.0.0
ClawScan security
TopHotCN · ClawHub's context-aware review of the artifact, metadata, and declared behavior.
Scanner verdict
Benign · Mar 10, 2026, 6:52 AM
- Verdict
- benign
- Confidence
- high
- Model
- gpt-5-mini
- Summary
- The skill's files and runtime instructions match its claimed purpose (fetch hot lists from tophub.today and optionally fetch article content with crawl4ai); there are no unexplained credential requests or hidden endpoints.
- Guidance
- This skill is internally consistent with its stated purpose. Before installing/running:
- (1) pip will install packages and Playwright will download Chromium; run in a controlled/sandboxed environment if you don't trust network installs.
- (2) crawl4ai will fetch arbitrary URLs over the network; review crawl4ai's trust model and privacy policy (it may perform additional network calls) and ensure you comply with target sites' terms/robots.
- (3) Outputs and a cache file are written locally (site_contents by default, plus tophub_hot.json in the scripts dir); check and clean those directories as needed.
- (4) The SKILL.md has a minor path-formatting typo ({baseDir} repeated), but this is documentation-only.
- If you need higher assurance, inspect the crawl4ai package source and run the scripts in an isolated VM or container.
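The guidance above notes that the skill's outputs land in a site_contents directory and a tophub_hot.json cache file by default. A minimal cleanup sketch under those assumptions; the function name here is hypothetical, not part of the skill's actual code:

```python
import shutil
from pathlib import Path

def clean_skill_outputs(output_dir="site_contents", cache_file="tophub_hot.json"):
    """Remove the crawler's local outputs: the article-content directory and
    the hot-list JSON cache. Paths are the defaults named in the review;
    adjust them if the skill was configured differently."""
    out = Path(output_dir)
    if out.is_dir():
        shutil.rmtree(out)  # delete the fetched-article directory tree
    cache = Path(cache_file)
    if cache.is_file():
        cache.unlink()  # delete the hot-list cache file
```

Both branches are no-ops when the paths don't exist, so it is safe to run before a fresh crawl as well as after.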
Review Dimensions
- Purpose & Capability
- ok
- The name and description ("fetch tophub.today hot lists and optionally fetch article body text") align with the included scripts: tophub_spider.py fetches hot-list entries (title/desc/url) from tophub.today and writes JSON; fetch_site_content.py uses crawl4ai to fetch article Markdown. No unrelated services or credentials are requested.
- Instruction Scope
- note
- SKILL.md instructs the agent to install pip packages and run Playwright to install Chromium (expected for crawling). Minor documentation glitches (repeated {baseDir}/scripts path strings) are present, but the runtime instructions and scripts do not attempt to read unrelated system files or exfiltrate data. Note: fetch_site_content.py uses the crawl4ai library, which performs network fetches of article URLs (expected for its purpose).
- Install Mechanism
- note
- No install spec in the registry; SKILL.md tells the agent to pip install crawl4ai, requests, tqdm, and pypinyin, and to run 'python -m playwright install chromium'. This is a normal runtime-dependency flow for a crawler, but it will download packages and a Chromium binary, which is a moderate risk if run untrusted. No suspicious download URLs or extract operations in the package.
- Credentials
- ok
- The skill requests no environment variables, keys, or config paths. The code performs only network calls to tophub.today and the target article URLs (via crawl4ai); no secrets or unrelated credentials are required.
- Persistence & Privilege
- ok
- The 'always' flag is false, and the skill does not request system-wide persistence or modify other skills. It writes local cache and output files under the script/output directories (expected behaviour).
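The Purpose & Capability note above says tophub_spider.py writes hot-list entries (title/desc/url) to a JSON file. A minimal sketch of that cache round-trip, assuming those field names and the tophub_hot.json filename from the review; the helper names are hypothetical, not the skill's actual functions:

```python
import json
from pathlib import Path

def write_hot_cache(entries, path="tophub_hot.json"):
    """Write hot-list entries (dicts with title/desc/url keys) to a JSON cache.
    ensure_ascii=False keeps Chinese titles readable in the file."""
    Path(path).write_text(
        json.dumps(entries, ensure_ascii=False, indent=2), encoding="utf-8"
    )

def read_hot_cache(path="tophub_hot.json"):
    """Load the cached entries back as a list of dicts."""
    return json.loads(Path(path).read_text(encoding="utf-8"))
```

Inspecting this file after a run is an easy way to confirm the spider recorded only titles, descriptions, and URLs, as the review claims.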
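The Install Mechanism note lists the exact dependency commands SKILL.md asks the agent to run. Expressing them as argv lists makes it easy to invoke them with subprocess.run inside a sandboxed environment, per the guidance; the constant names are hypothetical:

```python
import sys

# pip packages named in SKILL.md (per the review above)
PIP_INSTALL = [
    sys.executable, "-m", "pip", "install",
    "crawl4ai", "requests", "tqdm", "pypinyin",
]

# Playwright browser download step named in SKILL.md
PLAYWRIGHT_INSTALL = [sys.executable, "-m", "playwright", "install", "chromium"]
```

Running these via `subprocess.run(PIP_INSTALL, check=True)` inside a container or VM keeps the package and Chromium downloads away from the host, which is the controlled-environment setup the guidance recommends.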
