Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Web Scraper Pro

v1.1.0

Web data scraper: page scraping, table export, scheduled collection

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
Name/description, README, and SKILL.md all describe web scraping, CSV/Excel export, image download, and scheduling; the CLI commands shown (clawhub scrape ...) are consistent with that purpose. However, _meta.json declares a 'curl' dependency while the registry metadata reports no required binaries, an unexplained mismatch.
Instruction Scope
SKILL.md only instructs the agent to run clawhub CLI commands with user-supplied URLs/selectors and to schedule jobs. It does not direct the agent to read unrelated files or environment variables. Gaps: it does not state where scraped data/images are stored, or whether scheduling uses local cron or a remote service; this ambiguity could hide data exfiltration or cloud uploads.
Install Mechanism
There is no formal install spec in the registry (this is an instruction-only skill), but the README recommends 'npx clawhub@latest install web-scraper-pro', which would fetch code at runtime from the npm registry. That discrepancy (no install spec vs. a README npx instruction) increases risk: following the README would pull remote code, so verify the provenance of the 'clawhub' package before running npx.
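One way to do that verification is to fetch the tarball without executing anything. A minimal sketch, assuming GNU or BSD tar and assuming 'clawhub' is in fact the package the README refers to (check its maintainers and repository first, e.g. with npm view); `npm pack --ignore-scripts` downloads the tarball without running lifecycle scripts:

```shell
#!/bin/sh
# Hypothetical fetch step (package name taken from the README; verify it):
#   npm pack clawhub@latest --ignore-scripts

# Inspect a downloaded npm tarball: list its files and flag any
# install-time lifecycle scripts, which npm would otherwise run
# automatically on install.
inspect_tarball() {
  tar -tzf "$1"                       # list contents without extracting
  tar -xzOf "$1" package/package.json |
    grep -E '"(pre|post)?install"' \
    || echo "no install scripts declared"
}
```

Running `inspect_tarball` on the fetched .tgz prints the file list followed by either the declared install scripts or a note that none exist.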
Credentials
The skill declares no required env vars or credentials, which is reasonable for a simple scraper. However, SKILL.md and the README reference Pro/subscription/cloud-storage features without declaring any credential or endpoint; if those features are used at runtime, they may require tokens or perform network storage not described here.
Persistence & Privilege
The 'always' flag is false, and there are no declared config paths or permanent privileges. The skill's commands do include scheduling functionality, but there is no evidence it would force-enable itself or modify other skills.
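Whether the scheduling commands persist anything locally can be checked after use. A minimal sketch, assuming a Unix host where local scheduling would land in the user's crontab (the 'clawhub' search pattern is an assumption based on the CLI name):

```shell
#!/bin/sh
# Report any crontab entries that mention the clawhub CLI; prints a
# note when none are found (or when no crontab exists at all).
check_cron() {
  crontab -l 2>/dev/null | grep -i 'clawhub' \
    || echo "no clawhub cron entries found"
}
```

If the skill registers jobs on a remote service instead, nothing will show up here, which is itself worth investigating.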
What to consider before installing
This skill appears to do what it says (scrape pages, export tables), but check a few red flags before installing or running it:

- Verify the 'clawhub' CLI: SKILL.md expects a 'clawhub' command. The README's 'npx clawhub@latest install web-scraper-pro' would download code from npm; only run it if you trust the package owner and have inspected the package contents.
- Confirm the dependency mismatch: _meta.json lists 'curl' while the registry metadata lists none; ensure required binaries are present and legitimate.
- Ask where data goes: clarify whether scraped data/images are stored locally or uploaded to a cloud service (and which endpoint). If cloud upload or paid 'Pro' features are used, check for required credentials and privacy policies.
- Be cautious with scheduling: scheduling commands may persist jobs; verify whether scheduling runs locally or registers jobs on a remote service.
- Test in a sandbox first if you plan to run this on sensitive systems or with sensitive pages, and inspect any installed code (npm package or clawhub CLI) for unexpected network calls or credential use.

If the publisher or source cannot be verified, treat the README 'npx' install instruction as the primary risk and avoid running it without code review.
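The inspection step above can be partly mechanized. A minimal sketch that greps an unpacked package directory for hard-coded endpoints and environment-variable reads (the patterns are illustrative, not exhaustive, and no substitute for reading the code):

```shell
#!/bin/sh
# Quick first-pass scan of an unpacked package directory: surface
# network endpoints and env-var reads for manual review.
audit_dir() {
  echo "== URLs =="
  grep -rnE 'https?://' "$1" || echo "(none found)"
  echo "== env reads =="
  grep -rnE 'process\.env|getenv' "$1" || echo "(none found)"
}
```

A URL pointing at an undisclosed storage or telemetry endpoint, or an env read the skill never declared, would corroborate the gaps flagged above.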

Like a lobster shell, security has layers — review code before you run it.

latest: vk9716ac7r1w82tysbeg6m1pm018511y7

