Ecommerce Scraper

Warn

Audited by ClawScan on May 10, 2026.

Overview

This scraper’s core purpose is clear, but it includes explicit anti-bot evasion and persistent marketplace login cookies, so it should be reviewed carefully before use.

Install only if you understand the scraping and account risks. Prefer using it in an isolated environment, avoid logging into personal accounts unless necessary, delete saved cookies after use, and scrape only websites where you have permission.

Findings (4)

This is an artifact-based, informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals; ClawScan does not execute the skill or run runtime probes.

Finding 1: Cloudflare bypass and automation-signal hiding

What this means

Using this skill could violate site rules, trigger account or IP bans, and create compliance risk by sending deceptive automated traffic to third-party services.

Why it was flagged

The skill explicitly teaches Cloudflare bypass and hides the browser automation signal, which is bot-protection evasion rather than normal scraping.

Skill content
### 2. Bypass Cloudflare ... # Inject a script to hide automation fingerprints ... Object.defineProperty(navigator, 'webdriver', {get: () => undefined});
Recommendation

Use only on sites where you have authorization. Remove anti-detection/bypass instructions, add clear rate limits and permission checks, and require user confirmation before scraping protected sites.
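
The rate-limit and permission-check advice can be sketched in Python. This is a minimal sketch, not the skill's code: `PoliteScraper` and `ALLOWED_DOMAINS` are hypothetical names, and the allowlist must be filled with hosts you are actually authorized to scrape.

```python
import time
from urllib.parse import urlparse

# Hypothetical allowlist: only hosts the user is explicitly authorized to scrape.
ALLOWED_DOMAINS = {"marketplace.example.com"}


class PoliteScraper:
    """Sketch of a scraper gate: domain allowlist plus a minimum request interval."""

    def __init__(self, min_interval: float = 2.0):
        self.min_interval = min_interval
        self._last_request = 0.0

    def check_permission(self, url: str) -> None:
        # Reject any host that is not an allowed domain or a subdomain of one.
        host = urlparse(url).hostname or ""
        suffixes = tuple("." + d for d in ALLOWED_DOMAINS)
        if host not in ALLOWED_DOMAINS and not host.endswith(suffixes):
            raise PermissionError(f"{host} is not on the authorized-domain list")

    def throttle(self) -> None:
        # Sleep so consecutive requests are at least min_interval seconds apart.
        wait = self.min_interval - (time.monotonic() - self._last_request)
        if wait > 0:
            time.sleep(wait)
        self._last_request = time.monotonic()
```

Calling `check_permission(url)` and `throttle()` before every `page.goto(...)` would satisfy both the rate-limit and the permission-check recommendations.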

Finding 2: Persistent marketplace login cookies

What this means

Saved cookies may let the agent act as the logged-in user, and the local cookie file could expose account sessions if mishandled or copied.

Why it was flagged

The script persists and reloads login cookies for marketplace accounts, while the registry metadata declares no primary credential or credential-handling contract.

Skill content
self.cookies_file = 'data/cookies.json' ... self.context.add_cookies(cookies) ... self._save_cookies(cookies)
Recommendation

Declare cookie/session handling in metadata, store cookies securely, scope them to specific platforms, require explicit user consent before reuse, and provide a clear delete/logout command.

Finding 3: Chromium launched with the sandbox disabled

What this means

A malicious or compromised target page would run with less browser sandbox protection, increasing local-environment risk during scraping.

Why it was flagged

The scraper launches Chromium with sandboxing disabled and then navigates to user-provided URLs, which weakens isolation for untrusted web pages.

Skill content
args=[ '--disable-blink-features=AutomationControlled', '--disable-dev-shm-usage', '--no-sandbox', ] ... page.goto(page_url, wait_until="networkidle", timeout=30000)
Recommendation

Avoid `--no-sandbox` by default, run scraping in a disposable container or VM, and restrict targets to trusted or explicitly authorized domains.
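
One way to enforce the "no `--no-sandbox` by default" advice is to filter the launch flags before starting Chromium. A sketch under the assumption that the flag list is built in Python; `sanitize_launch_args` and `UNSAFE_FLAGS` are hypothetical names, not part of the skill.

```python
# Flag prefixes that weaken the browser sandbox or hide the automation signal.
UNSAFE_FLAGS = ("--no-sandbox", "--disable-blink-features")


def sanitize_launch_args(args: list[str]) -> list[str]:
    """Drop unsafe Chromium flags; keep benign ones like --disable-dev-shm-usage."""
    return [
        a for a in args
        if not any(a == f or a.startswith(f + "=") for f in UNSAFE_FLAGS)
    ]
```

The sanitized list would then be passed to Playwright's `chromium.launch(args=...)`; better still, run the whole scrape inside a disposable container or VM so even untrusted page content stays contained.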

Finding 4: Undeclared install dependencies

What this means

Users may need to install external packages and browser binaries that are not captured by the registry install metadata.

Why it was flagged

The skill declares no install spec, yet its scripts instruct users to install Playwright and Chromium without pinned versions; the dependency is expected for the skill's purpose but is under-declared.

Skill content
Playwright must be installed: pip install playwright && playwright install chromium
Recommendation

Add an explicit install spec with pinned package versions and document the browser dependency clearly.
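
A minimal sketch of a pinned install spec; the version number below is a placeholder and should be replaced with the release the skill is actually tested against.

```text
# requirements.txt (illustrative pin; replace with the tested release)
playwright==1.44.0

# After installing, fetch the matching Chromium build:
#   python -m playwright install chromium
```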