Skill v2.0.3

ClawScan security

Novel Scraper Pro · ClawHub's context-aware review of the artifact, metadata, and declared behavior.

Scanner verdict

Benign · Apr 4, 2026, 1:08 PM
Verdict
benign
Confidence
high
Model
gpt-5-mini
Summary
The skill's code and runtime instructions are consistent with a novel-scraper tool: it uses curl/BeautifulSoup (and an optional 'openclaw browser' CLI) to fetch pages, saves scraping progress to disk, writes its output as TXT files, and does not request unrelated secrets or install arbitrary remote code.
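As a rough illustration of that fetch-and-parse flow, here is a minimal sketch; the function name, arguments, and output path are assumptions for illustration, not the skill's actual code:

```python
import subprocess
from pathlib import Path

from bs4 import BeautifulSoup


def fetch_chapter(url: str, out_dir: Path) -> Path:
    """Fetch a page with curl and save its visible text as a TXT file.

    Hypothetical helper approximating the behavior the scan describes;
    the skill's real function names, arguments, and paths may differ.
    """
    # The skill shells out to curl (-sL: silent, follow redirects)
    # rather than using a Python HTTP library.
    html = subprocess.run(
        ["curl", "-sL", url], capture_output=True, text=True, check=True
    ).stdout

    soup = BeautifulSoup(html, "html.parser")
    title = soup.title.get_text(strip=True) if soup.title else "chapter"
    # Keep the output filename filesystem-safe.
    safe_name = "".join(c if c.isalnum() or c in " -_" else "_" for c in title)

    out_dir.mkdir(parents=True, exist_ok=True)
    out_path = out_dir / f"{safe_name}.txt"
    out_path.write_text(soup.get_text("\n", strip=True), encoding="utf-8")
    return out_path
```

A call such as fetch_chapter("https://example.com/chapter-1", Path.home() / ".openclaw/workspace/novel-scraper-pro") would produce one TXT file per page, matching the output behavior noted above.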
Guidance
This skill appears to be what it says: a local novel scraper that uses curl/BeautifulSoup and, optionally, an 'openclaw browser' CLI for SPA sites. Before installing, consider:

1) It will perform network requests to arbitrary URLs you pass; respect each site's terms of service.
2) It invokes subprocesses (curl and openclaw browser); make sure you trust those CLIs and the environment where the skill will run.
3) It reads /proc/meminfo and writes files under ~/.openclaw/workspace and /tmp, including a progress.json you can delete to reset state (see the sketch after this list).
4) There are small inconsistencies: some filenames, paths, and version strings differ across files (e.g., fetch_catalog writes to a 'novel-scraper' path while other scripts use 'novel-scraper-pro'). These are not security-critical but may cause confusion or require you to adjust paths.
5) Run the skill in an isolated environment or sandbox if you plan to scrape untrusted websites.

If you need higher assurance, inspect or run the included scripts locally in a safe environment and verify the behavior of the 'openclaw browser' CLI before enabling SPA mode.
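As a concrete example of point 3, resetting the scraper amounts to deleting its progress.json. This short sketch checks both path spellings flagged in point 4; the exact locations are inferred from the notes above, not verified against the scripts:

```python
from pathlib import Path

# Both directory spellings appear across the skill's scripts per the scan
# notes, so check each candidate workspace before deleting.
candidates = [
    Path.home() / ".openclaw/workspace/novel-scraper/progress.json",
    Path.home() / ".openclaw/workspace/novel-scraper-pro/progress.json",
]

for path in candidates:
    if path.exists():
        path.unlink()  # remove saved progress to restart the scrape from scratch
        print(f"removed {path}")
```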

Review Dimensions

Purpose & Capability
ok · Name/description match the code and SKILL.md: all scripts implement scraping, catalog fetching, URL extraction, and merging. Required capabilities (curl, BeautifulSoup/bs4) are consistent with the stated purpose. No unrelated credentials or tools are requested.
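A quick local check of those capabilities before enabling the skill might look like the following (a sketch, not part of the skill itself):

```python
import shutil

# curl must be on PATH because the skill shells out to it for fetching.
assert shutil.which("curl"), "curl not found on PATH"

# BeautifulSoup ships as the 'beautifulsoup4' package but imports as 'bs4'.
import bs4  # noqa: F401  raises ImportError if the dependency is missing

print("curl and bs4 are available")
```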
Instruction Scope
note · Runtime instructions tell the agent to run the included Python scripts, which read/write files under ~/.openclaw/workspace and /tmp, call curl and (optionally) an 'openclaw browser' CLI, and read /proc/meminfo for memory checks. These actions are within scope for a scraper, but the skill will perform network requests and create/modify local files (including progress/state).
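The /proc/meminfo read is most plausibly a simple available-memory guard; a hedged sketch of what such a check could look like (the field parsed and the threshold are assumptions, not taken from the skill's scripts):

```python
def available_memory_mb() -> int:
    """Parse MemAvailable from /proc/meminfo (Linux only); the value is in kB."""
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemAvailable:"):
                return int(line.split()[1]) // 1024
    return 0

# A scraper might skip the browser-based (SPA) fetch path when memory is low.
if available_memory_mb() < 512:
    print("low memory: consider skipping SPA mode")
```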
Install Mechanism
ok · No install spec downloads code at install time; dependencies are standard Python packages (beautifulsoup4/bs4) listed in requirements.txt. The project runs curl/CLI commands via subprocess rather than installing arbitrary binaries from remote URLs.
Credentials
ok · The skill requests no environment variables or external credentials. It reads local system info (/proc/meminfo) and writes state/cache under the user's home (~/.openclaw) and /tmp, which is expected for progress caching and output files.
Persistence & Privilege
ok · 'always' is false and the skill does not request platform-level persistence. It writes its own state/progress files in its workspace but does not modify other skills or global agent settings.