Install
openclaw skills install scrapling-fetcher

ClawHub Security found sensitive or high-impact capabilities. Review the scan results before using.
Web scraping using Scrapling — a Python framework with anti-bot bypass (Cloudflare Turnstile, fingerprint spoofing), adaptive element tracking, stealth headless browser, and full CSS/XPath extraction. Use when web_fetch fails (Cloudflare, JS-rendered pages), or when extracting structured data from websites (prices, articles, lists). Supports HTTP, stealth, and full browser modes. Source: github.com/D4Vinci/Scrapling (PyPI: scrapling). Only use on sites you have permission to scrape.
Source: https://github.com/D4Vinci/Scrapling (open source, MIT-like license)
PyPI: scrapling — install before first use (see below)
⚠️ Only scrape sites you have permission to access. Respect robots.txt and Terms of Service. Do not use stealth modes to bypass paywalls or access restricted content without authorization.
pip install scrapling[all]
patchright install chromium # required for stealth/dynamic modes
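Before first use, a quick stdlib-only check can confirm both distributions are importable (the package names are taken from the install commands above):

```python
import importlib.util

def check_deps(names):
    """Map each package name to True if importable, else False."""
    return {n: importlib.util.find_spec(n) is not None for n in names}

# Package names assumed from the pip install step above
for name, ok in check_deps(["scrapling", "patchright"]).items():
    print(f"{name}: {'ok' if ok else 'missing - run pip install scrapling[all]'}")
```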
scrapling[all] installs patchright (a stealth fork of Playwright, bundled as a PyPI package, not a typo), curl_cffi, MCP server deps, and an IPython shell.

patchright install chromium downloads Chromium (~100 MB) via patchright's own installer (same mechanism as playwright install chromium).

scripts/scrape.py is the CLI wrapper for all three fetcher modes.
# Basic fetch (text output)
python3 ~/skills/scrapling/scripts/scrape.py <url> -q
# CSS selector extraction
python3 ~/skills/scrapling/scripts/scrape.py <url> --selector ".class" -q
# Stealth mode (Cloudflare bypass) — only on sites you're authorized to access
python3 ~/skills/scrapling/scripts/scrape.py <url> --mode stealth -q
# JSON output
python3 ~/skills/scrapling/scripts/scrape.py <url> --selector "h2" --json -q
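The flag combinations above can be scripted rather than typed by hand. A minimal sketch; the script path and flag names are copied from the examples, and the helper itself is not part of the skill:

```python
import shlex

SCRIPT = "~/skills/scrapling/scripts/scrape.py"  # path assumed from the examples above

def build_cmd(url, selector=None, mode=None, json_out=False, quiet=True):
    """Assemble an argv list for the scrape.py CLI wrapper."""
    cmd = ["python3", SCRIPT, url]
    if selector:
        cmd += ["--selector", selector]
    if mode:
        cmd += ["--mode", mode]
    if json_out:
        cmd.append("--json")
    if quiet:
        cmd.append("-q")
    return cmd

# Print the shell-quoted form of one example invocation
print(shlex.join(build_cmd("https://example.com", selector="h2", json_out=True)))
```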
Mode selection:

- --mode http (default): fast plain HTTP fetching
- --mode dynamic: JS-rendered pages
- --mode stealth: use when web_fetch returns 403/429 or a Cloudflare challenge

For custom logic beyond the CLI, write inline Python. See references/patterns.md for:
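The escalation rule above can be sketched as a small helper. The status-code mapping is an illustrative assumption based on the guidance, not an API provided by the skill:

```python
def pick_mode(status_code=None, js_required=False, challenge=False):
    """Choose a scrape.py --mode value; thresholds are illustrative assumptions."""
    if challenge or status_code in (403, 429):
        return "stealth"   # anti-bot wall: full stealth browser
    if js_required:
        return "dynamic"   # page needs JS rendering
    return "http"          # default: fast plain HTTP

print(pick_mode(status_code=403))  # prints "stealth"
```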
- auto_save / adaptive: saves element fingerprints locally
- MCP server (scrapling mcp): starts a local network service for AI-native scraping. Only start if explicitly needed and trusted; it exposes a local HTTP server.
- auto_save=True: persists element fingerprints to disk for adaptive re-scraping. Creates local state in the working directory.
- xvfb-run needed.