Scrape Web
PassAudited by ClawScan on May 1, 2026.
Overview
This appears to be a straightforward web-scraping tool, with the main considerations being unpinned setup commands, arbitrary URL fetching, and raw webpage output that should be treated as untrusted.
This skill looks appropriate for scraping web pages. Before installing, use a virtual environment, install dependencies from trusted sources, and only let the agent scrape URLs and write files that you explicitly intend.
Findings (3)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
A malicious or misleading webpage could include text that tries to influence the agent if the output is treated as instructions rather than scraped data.
The script returns raw webpage body content directly to stdout, which can enter the agent's context.
text = page.body ... return text ... sys.stdout.write(text)
Treat all scraped webpage content as untrusted data and avoid letting page text override the user's original request.
If invoked with an unintended URL or output path, it could fetch content the user did not mean to retrieve or overwrite a file chosen in the command.
The script accepts an arbitrary target URL and can write results to a caller-supplied output path.
parser.add_argument("--url", required=True, help="Target URL") ... page = StealthyFetcher.fetch(url, headless=True, network_idle=True) ... with open(args.out, "w", encoding="utf-8") as f:Use only user-approved URLs and safe output locations, preferably a dedicated working directory.
The installed package versions and any installer-downloaded components may change over time.
The setup instructions install unpinned Python dependencies and run Scrapling's installer.
pip install "scrapling[all]" scrapling install pip install httpx
Install in an isolated environment and consider pinning dependency versions from trusted package sources.
