Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
crawl
v1.0.2 · Crawl any JavaScript-rendered webpage through distributed real Chrome browsers. No local browser needed — perfect for a headless VPS.
by @hlyylly
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw verdict: Suspicious (high confidence)

Purpose & Capability
The skill's functionality (distributed real Chrome crawling) matches the code and instructions: the CLI posts to an OpenCrawl API and downloads the results. However, the registry metadata at the top of the package lists no required environment variables, while SKILL.md and tools/crawl.py clearly require OPENCRAWL_API_KEY (and optionally OPENCRAWL_API_URL) — an internal inconsistency.
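The request flow described above can be sketched as follows. This is a hypothetical reconstruction: the endpoint path (`/crawl`), the payload field names, and the `Bearer` header scheme are assumptions, not taken from the skill's actual tools/crawl.py.

```python
import os

# Default endpoint is the public raw-IP server quoted in SKILL.md;
# OPENCRAWL_API_URL overrides it.
API_URL = os.environ.get("OPENCRAWL_API_URL", "http://39.105.206.76:9877")

def build_crawl_request(target_url: str, api_key: str) -> dict:
    """Assemble the POST the CLI would send to the OpenCrawl API (sketch)."""
    return {
        "url": f"{API_URL}/crawl",                        # assumed endpoint path
        "headers": {"Authorization": f"Bearer {api_key}"},
        "json": {"url": target_url},                      # page to render remotely
    }
```

Note that the page URL itself is the payload: whatever you ask the service to crawl leaves your machine.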
Instruction Scope
SKILL.md explicitly instructs users to register at a raw IP (http://39.105.206.76:9877) and use that public server. The runtime instructions and code send the requested URL to the remote service, which renders pages in external worker browsers and stores the results on Cloudflare R2; any URL or content you crawl is therefore transmitted to and stored by a third party. The code also blindly fetches whatever downloadUrl the service returns, without additional validation.
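A cautious client could validate that downloadUrl before fetching it. The guard below is a suggestion, not part of the skill; the allowed host suffix is an assumption (the scan only says results live on Cloudflare R2), so adjust it to whatever storage your deployment actually uses.

```python
from urllib.parse import urlparse

# Assumed storage host for results; the skill itself does no such check.
ALLOWED_HOST_SUFFIXES = (".r2.cloudflarestorage.com",)

def is_safe_download_url(url: str) -> bool:
    """Accept only HTTPS URLs that point at an expected storage host."""
    parsed = urlparse(url)
    if parsed.scheme != "https":
        return False
    host = parsed.hostname or ""
    return host.endswith(ALLOWED_HOST_SUFFIXES)
```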
Install Mechanism
There is no install spec (this is an instruction-only skill) and only a minimal Python dependency (requests). Nothing is downloaded or executed at install time from untrusted URLs.
Credentials
Requesting an API key (OPENCRAWL_API_KEY) is proportionate for a remote crawling service. But SKILL.md and the code default to an unencrypted HTTP API endpoint at a raw IP, meaning the Authorization header (the API key) and payloads are sent in plaintext if the user follows the quick-start. The registry metadata also omits the required env var, which is a further discrepancy worth noting.
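One way to avoid the plaintext-key problem is to refuse to talk to a non-HTTPS endpoint at all. A minimal sketch of such a check, again a user-side addition rather than anything the skill ships; the default below is the public endpoint quoted in SKILL.md:

```python
import os
from urllib.parse import urlparse

def require_https_endpoint() -> str:
    """Return the configured API endpoint, refusing plaintext HTTP."""
    api_url = os.environ.get("OPENCRAWL_API_URL", "http://39.105.206.76:9877")
    if urlparse(api_url).scheme != "https":
        # The Authorization header would otherwise cross the wire unencrypted.
        raise RuntimeError(f"refusing plaintext endpoint: {api_url}")
    return api_url
```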
Persistence & Privilege
The skill does not request always:true, does not modify other skills or system-wide settings, and does not require persistent elevated privileges.
What to consider before installing
This skill functions as a remote crawling client, but it relies on a public server at a raw IP and uses HTTP by default. That means: (1) any pages you ask it to crawl (including sensitive pages) will be transmitted to and stored by that third-party service (results are kept on Cloudflare R2), and (2) your API key and requests may be sent in plaintext if you use the HTTP endpoint. Before installing, consider these steps:
(a) do not use the public server for sensitive URLs or credentials;
(b) prefer self-hosting the OpenCrawl server from the linked GitHub repo, and set OPENCRAWL_API_URL to an HTTPS endpoint you control;
(c) if you must use the public server, confirm it supports HTTPS and use an https:// URL — otherwise do not use it;
(d) rotate API keys regularly and keep them scoped and limited;
(e) inspect the upstream OpenCrawl server code (the GitHub repo) to verify what data is logged or stored and how workers are provisioned;
(f) if unsure, run the tool in an isolated environment and avoid crawling pages that contain secrets or personal data.
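The checklist above can be condensed into a small preflight check run before crawling anything. The env var names come from the skill; the specific checks are suggestions, not part of the skill itself.

```python
import os
from urllib.parse import urlparse

def preflight_warnings() -> list:
    """Collect configuration warnings before sending any URL to the service."""
    warnings = []
    api_url = os.environ.get("OPENCRAWL_API_URL", "http://39.105.206.76:9877")
    parsed = urlparse(api_url)
    if parsed.scheme != "https":
        warnings.append("endpoint is not HTTPS; API key would travel in plaintext")
    if parsed.hostname == "39.105.206.76":
        warnings.append("using the public server; do not crawl sensitive pages")
    if not os.environ.get("OPENCRAWL_API_KEY"):
        warnings.append("OPENCRAWL_API_KEY is not set")
    return warnings
```

An empty list means the obvious footguns are avoided; it says nothing about what the remote server logs or stores.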
latest: vk971e06ge2gyc1rve32pxej3f18350m8
