Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Web Crawler

v1.0.0

A web-crawler tool supporting static and dynamic page crawling, media downloads, and anti-bot evasion. Activation triggers: the user mentions crawler, scraper, crawling or scraping web pages, or downloading media (爬虫, 爬取, 抓取网页, 下载媒体).

0 · 112 · 0 current · 0 all-time
by 噢福阔斯KANG (@jinkang19940922)
MIT-0
Download zip
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Suspicious · View report →
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The described capability (static/dynamic crawling, media download, anti-bot evasion) matches the SKILL.md content. However, the skill requires a local Node module ('./src/index.js'), Puppeteer, and a system Chrome binary, and expects config files under the workspace; none of these artifacts or required binaries are declared in the registry metadata. That mismatch suggests incomplete packaging or careless metadata.
Instruction Scope
The SKILL.md instructs the agent to cd into /home/node/.openclaw/workspace/web-crawler, require local code, read config/default.json, use proxy lists (including hardcoded 192.168.x.x addresses), and write scraped HTML, media, and screenshots into outputs/. These are file-system and network operations that go well beyond simple instructions: they create persistent output directories and depend on local binaries and proxies. The skill includes no safeguards and does not explain consent or permissions for writing files or downloading large volumes of data.
Install Mechanism
There is no install spec (instruction-only), which is low-risk in general. But because the instructions expect Node/Puppeteer/Chrome and local source files, the absence of an install step is an inconsistency: a consumer would need to manually install dependencies and supply the missing code and browser, increasing the chance of misconfiguration or supply-chain risk.
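Because the package ships no install spec, a consumer could run a quick preflight check for the runtime pieces the SKILL.md assumes before attempting to use it. A minimal sketch; the file paths and binary names here are assumptions drawn from the scan notes above, not declared metadata:

```python
import shutil
from pathlib import Path

def missing_requirements(workspace: str) -> list[str]:
    """Report runtime pieces the SKILL.md assumes but the registry never declares."""
    missing = []
    # Local files the instructions reference but the package does not ship.
    for rel in ("src/index.js", "config/default.json"):
        if not (Path(workspace) / rel).exists():
            missing.append(rel)
    # System binaries Puppeteer-based crawling depends on.
    if shutil.which("node") is None:
        missing.append("node")
    if not any(shutil.which(b) for b in ("google-chrome", "chromium", "chromium-browser")):
        missing.append("chrome/chromium")
    return missing
```

If this returns a non-empty list against the unpacked skill directory, the packaging gap described above is confirmed and the skill cannot run as written.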
Credentials
No environment variables or credentials are declared, yet the skill expects proxy configuration (antiBot.proxyList) and access to system browser executables and the workspace filesystem. Hardcoded proxies pointing at private IPs are suspicious (they may route traffic through an internal host). The skill will download media and write structured data locally, which could be used for large-scale scraping or exfiltration if misused.
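The hardcoded 192.168.x.x proxies can be flagged mechanically: addresses in private (RFC 1918) or loopback ranges should never appear in a published proxy list. A hedged sketch using Python's standard ipaddress module; the flat list-of-URLs shape of antiBot.proxyList is inferred from the scan, not confirmed:

```python
import ipaddress
from urllib.parse import urlparse

def private_proxies(proxy_list: list[str]) -> list[str]:
    """Return proxies whose host is a private (RFC 1918) or loopback address."""
    flagged = []
    for proxy in proxy_list:
        # Handle both "http://host:port" and bare "host:port" entries.
        host = urlparse(proxy).hostname or proxy.split(":")[0]
        try:
            addr = ipaddress.ip_address(host)
        except ValueError:
            continue  # hostname, not a literal IP; skip
        if addr.is_private or addr.is_loopback:
            flagged.append(proxy)
    return flagged
```

Any hit means traffic would be routed through a host on someone's internal network, which is exactly the exfiltration concern raised above.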
Persistence & Privilege
The skill does not request always:true and does not declare changes to other skills or system-wide settings. It will create output files under its workspace, which is normal for a crawler, but that is not a platform-level persistence privilege.
What to consider before installing
Do not enable or run this skill until the author/source is verified and missing pieces are resolved. Ask for: (1) source code (the referenced ./src/index.js and package.json), (2) an install spec or explicit list of runtime dependencies (Node version, Puppeteer, Chrome) and how Puppeteer/Chrome will be provided, (3) explanation and justification for the hardcoded proxies (why those IPs, who controls them), and (4) where outputs are stored, retention policy, and limits on media downloads. If you must test it, run it in an isolated VM or sandbox with restricted network access, remove/replace hardcoded proxies, and avoid granting broad autonomous invocation or access to sensitive internal networks. Because the package is instruction-only and inconsistent, proceed cautiously.

Like a lobster shell, security has layers — review code before you run it.

latest: vk970vx9na9w72q8ejphpxtsem584qg56


Runtime requirements

🕷️ Clawdis
