Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

OpenClaw Scrapling

v1.0.0

Advanced web scraping with anti-bot bypass, JavaScript support, and adaptive selectors. Use when scraping websites with Cloudflare protection, dynamic content…

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Suspicious
OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description match the code and docs: scrape.py, the examples, requirements.txt, and skill.json all describe a scraper with stealth/dynamic/adaptive features. The declared required binaries (python3, pip) and Python package dependency (scrapling) are appropriate for the described functionality. Minor version differences in the metadata (>=0.3.0 vs >=0.4.0) are not a red flag by themselves.
Instruction Scope
SKILL.md and scrape.py instruct the agent to run local scraping commands and to store sessions/selectors in the skill directory. The instructions allow scraping arbitrary URLs (external or internal), performing logins (username/password passed as CLI args), saving session files, screenshots, and writing outputs. They do not instruct reading unrelated system files or environment variables, but they do permit sending credentials via CLI args and persisting session tokens/cookies to disk.
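Passing credentials as CLI arguments leaves them visible to other local processes via the process table. If a login is unavoidable, a wrapper along these lines avoids that exposure. This is an illustrative sketch only: the helper name and the SCRAPE_* environment variables are assumptions, not part of the skill.

```python
import getpass
import os
import sys

def read_credentials(args):
    """Prefer environment variables or an interactive prompt over CLI args.

    Credentials passed on the command line (as this skill permits) are
    visible in `ps` output on most systems; env vars and prompts are not.
    `args` is a dict of whatever the caller parsed from the command line.
    """
    username = args.get("username") or os.environ.get("SCRAPE_USERNAME")
    password = args.get("password") or os.environ.get("SCRAPE_PASSWORD")
    if username and not password and sys.stdin.isatty():
        # Fall back to a no-echo prompt rather than a plaintext argument.
        password = getpass.getpass(f"Password for {username}: ")
    return username, password
```

CLI arguments still take precedence here so existing invocations keep working; drop that branch entirely if you want to forbid plaintext arguments.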
Install Mechanism
There is no built-in install spec in the registry entry, but the repo includes requirements.txt and documented install steps that call 'pip install -r requirements.txt' and 'scrapling install', which downloads browser binaries (~500MB). This is expected for a browser-driven scraper; no obscure external download URLs or shorteners are used in the package itself. Browser downloads occur at runtime when the helper command is run.
Credentials
The skill declares no required environment variables or credentials, which fits the described purpose. However, the tool accepts credentials via CLI arguments (username/password) and will persist session state (session files and selector_cache.json) in the skill directory. Those behaviors are reasonable for a scraper but mean the skill can store sensitive tokens/credentials if provided.
Persistence & Privilege
always:false (no forced installation). The skill writes files into its own directory (sessions/, selector_cache.json) and downloads browsers into standard caches during 'scrapling install'. This is normal for this class of tool but means data and cookies will persist on disk under the skill and browser cache directories unless cleaned.
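The persisted state can be removed with a short cleanup routine. A minimal sketch, assuming the layout described in this review (a sessions/ directory and selector_cache.json directly under the skill directory); clean_skill_state is a hypothetical helper, not part of the skill:

```python
import shutil
from pathlib import Path

def clean_skill_state(skill_dir):
    """Delete persisted session files and the selector cache.

    Targets the paths this review describes: sessions/ and
    selector_cache.json under the skill directory. Returns the list of
    paths that were actually removed. Adjust if your layout differs;
    browser binaries in standard caches are NOT touched here.
    """
    skill_dir = Path(skill_dir)
    removed = []
    sessions = skill_dir / "sessions"
    if sessions.is_dir():
        shutil.rmtree(sessions)  # session files may hold cookies/tokens
        removed.append(sessions)
    cache = skill_dir / "selector_cache.json"
    if cache.is_file():
        cache.unlink()
        removed.append(cache)
    return removed
```

A typical call would be `clean_skill_state(Path.home() / ".openclaw/skills/scrapling")` after a scraping session that logged in anywhere.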
Assessment
This package is internally coherent for an advanced web scraper — it will run Python code, download browser binaries, and can access arbitrary URLs (including internal network addresses). Before installing:

- Only install if you trust the source (the GitHub repo) and accept that the skill will run code and download ~500MB of browser binaries.
- Expect session cookies and selector caches to be written under the skill directory (~/.openclaw/skills/scrapling/sessions and selector_cache.json). Remove those files if they may contain sensitive tokens.
- Do not pass secrets or site credentials to the tool unless you trust it and the host environment; CLI args (username/password) are stored only if you save sessions.
- If you are concerned about exfiltration or internal network access, run the skill in a restricted environment (network policies, sandbox, or VM) and inspect scrape.py and the installed scrapling package source before use.
- To ensure minimal privilege, avoid enabling stealth/dynamic modes that start a browser or save sessions, and prefer one-off basic HTTP fetches with explicit, safe target URLs.
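One cheap mitigation for the internal-network concern is a pre-flight check that rejects URLs resolving to private, loopback, or link-local addresses before they are handed to the scraper. A sketch (is_internal_target is a hypothetical helper, not part of the skill; a resolver race is still possible, so network-level policy remains the stronger control):

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_internal_target(url):
    """Return True if the URL points at an internal network address.

    Resolves the hostname and checks every returned address against the
    private, loopback, and link-local ranges. Fails closed: unparseable
    URLs and resolution errors are treated as internal (rejected).
    """
    host = urlparse(url).hostname
    if host is None:
        return True  # no hostname at all: reject
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return True  # could not resolve: reject rather than guess
    for _, _, _, _, sockaddr in infos:
        addr = ipaddress.ip_address(sockaddr[0])
        if addr.is_private or addr.is_loopback or addr.is_link_local:
            return True
    return False
```

Calling this on each target before invoking scrape.py blocks the obvious 127.0.0.1 / 10.x / 192.168.x probes; it does not replace a sandbox or egress firewall.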

Like a lobster shell, security has layers — review code before you run it.

latest · vk978fd2prvceh1rwp9tzng8rfn81z20m

Runtime requirements

🕷️ Clawdis
Bins: python3, pip
