Web Crawl
Analysis
This appears to be a coherent web-crawling research helper, but it can run local Python and fetch arbitrary web pages, so users should review URLs and dependencies before use.
Findings (4)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Checks for instructions or behavior that redirect the agent, misuse tools, execute unexpected code, cascade across systems, exploit user trust, or continue outside the intended task.
resp = requests.get(url, headers=self.headers, timeout=self.timeout, allow_redirects=True)
The crawler can request caller-supplied URLs and follows redirects. This is expected for a web crawler, but it means both the chosen URL and any redirect targets matter: a crafted link or redirect can steer the crawler toward unintended hosts.
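As a mitigation sketch for the finding above, callers could validate URLs before passing them to the crawler. This guard is hypothetical and not part of the skill's code; it only illustrates rejecting non-HTTP(S) schemes up front.

```python
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"http", "https"}

def is_allowed(url: str) -> bool:
    """Return True only for http(s) URLs with a host.

    A minimal pre-fetch guard (illustrative, not from the skill):
    blocks file://, javascript:, and other non-web schemes.
    """
    parsed = urlparse(url)
    return parsed.scheme in ALLOWED_SCHEMES and bool(parsed.netloc)
```

Note that this does not constrain redirect targets; with `allow_redirects=True`, each hop would need the same check (for example via a custom session or by following redirects manually).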
pip3 install requests beautifulsoup4
The skill documents manual installation of unpinned Python dependencies, and the registry lists no install spec. The dependencies are expected for this crawler, but unpinned versions complicate supply-chain review because the resolved packages can change between installs.
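One way to address the unpinned-dependency finding is a pinned requirements file. The versions below are illustrative only, not taken from the skill:

```
# requirements.txt — pinned versions are examples, not the skill's spec
requests==2.32.3
beautifulsoup4==4.12.3
```

Installing with `pip3 install -r requirements.txt` then yields a reproducible dependency set that a reviewer can audit.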
exec:1 { "command": "cd ~/.openclaw/workspace-main/skills/web-crawl && python3 -c ... parallel_crawl(...)" }
The examples show an exec command being used to run the local Python crawler. This is disclosed and central to the skill, but it is broader than a scoped tool call.
Checks for exposed credentials, poisoned memory or context, unclear communication boundaries, or sensitive data that could leave the user's control.
Use the crawled content to:
- Extract key findings
- Compare sources
- Identify unique insights
- Cite sources
The skill places untrusted webpage content into the agent's analysis context. That is expected for research, but pages can contain instructions or misleading text.
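A common mitigation for the context-injection risk above is to label fetched text as data before it enters the analysis context. The helper below is a sketch of that idea, not part of the skill; the delimiter format is an assumption.

```python
def wrap_untrusted(source_url: str, text: str) -> str:
    """Wrap fetched page text in explicit markers so downstream
    analysis treats it as quoted data, not as instructions.

    Illustrative only: the marker format is a hypothetical choice,
    and markers reduce but do not eliminate prompt-injection risk.
    """
    return (
        f"<<<UNTRUSTED CONTENT from {source_url}>>>\n"
        f"{text}\n"
        "<<<END UNTRUSTED CONTENT>>>"
    )
```

The agent can then be instructed to cite or compare material inside the markers without executing any directives it contains.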
