Scrapling Web Scraping

Zero-bot-detection web scraping for OpenClaw. Bypass Cloudflare, handle JavaScript-heavy sites, and adapt to website changes automatically. Use when you need...

MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan

VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
Name, description, SKILL.md, and included wrapper script are coherent: the code simply calls a third‑party 'scrapling' package to perform basic/stealth/dynamic scraping. There are no unrelated environment variables, binaries, or claims that contradict the code.
Instruction Scope
The runtime instructions direct the user/agent to run 'pip install "scrapling[all]"' and 'scrapling install' (which downloads browsers). The skill does not instruct reading unrelated system files or exfiltrating secrets, but it explicitly advocates stealth modes that bypass anti-bot protections, a sensitive, dual-use capability. It also references writing custom scripts into /root/.openclaw/skills/, which is expected but worth noting.
Install Mechanism
No formal install spec in the skill bundle; instead the SKILL.md instructs installing a PyPI package and running 'scrapling install' to download browsers. That implicitly pulls and executes third‑party code and remote binaries from the network (source/provenance not verified in the skill). This is higher risk because the skill itself does not declare or pin where those artifacts come from or provide checksums.
Credentials
The skill requests no environment variables, credentials, or config paths in its metadata. That is proportionate to the wrapper's stated functionality. Note: some stealth/captcha‑solving features could require external solver services or keys in practice (none are declared).
Persistence & Privilege
The skill is not always-enabled and does not request elevated platform privileges. It does suggest creating custom scripts in the skill directory (normal). There is no code that modifies other skills or global agent settings.
What to consider before installing
This skill is functionally consistent with its description: it delegates work to a third-party 'scrapling' package and provides a small wrapper script. The main risks come from installing that external package and running 'scrapling install', which downloads browser binaries and may execute code; the skill bundle doesn't include an install spec, release host, or checksums.

Before installing:

  • Verify the upstream project (PyPI/GitHub) and its maintainers.
  • Review the actual 'scrapling' package source and install script.
  • Confirm where browser binaries are downloaded from and whether checksums/signatures are provided.
  • Prefer running it in an isolated environment (container or VM).

Also consider legal and ethical issues: bypassing anti-bot protections can violate terms of service or law; only use it against targets you are authorized to scrape. If you need higher assurance, ask the author for the package's exact release URL, checksums, and an install manifest, or obtain the upstream project's verified release artifacts.
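The checksum advice above needs nothing beyond the standard library. A minimal sketch, assuming you have a downloaded artifact and a digest published by the maintainers (the temporary file here is a stand-in, not a real release):

```python
# Sketch: verify a downloaded artifact against a published SHA-256 digest
# before installing it. The temporary file stands in for a real download.
import hashlib
import os
import tempfile

def sha256_of(path, chunk_size=65536):
    """Stream a file through SHA-256 and return the hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Stand-in for a downloaded release artifact:
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b"example release artifact")
tmp.close()

published_digest = hashlib.sha256(b"example release artifact").hexdigest()
matches = sha256_of(tmp.name) == published_digest
os.unlink(tmp.name)
print("checksum ok" if matches else "checksum MISMATCH: do not install")
```

Only proceed with installation when the computed digest matches one published by the upstream maintainers over a trusted channel.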


Current version: v1.0.0


SKILL.md

Scrapling Web Scraping

Zero-bot-detection web scraping for OpenClaw. Bypass Cloudflare, handle JavaScript-heavy sites, and adapt to website changes automatically.

Quick Start

# Install Scrapling
pip install "scrapling[all]"
scrapling install

# Basic usage
python3 /root/.openclaw/skills/scrapling-web-scraping/scrapling_tool.py https://example.com

# Bypass Cloudflare
python3 /root/.openclaw/skills/scrapling-web-scraping/scrapling_tool.py https://protected-site.com --mode stealth --cloudflare

# Extract specific data
python3 /root/.openclaw/skills/scrapling-web-scraping/scrapling_tool.py https://example.com --selector ".product-title"

# JavaScript-heavy sites
python3 /root/.openclaw/skills/scrapling-web-scraping/scrapling_tool.py https://spa-app.com --mode dynamic --wait ".content-loaded"

Usage with OpenClaw

Natural Language Commands

Basic scraping:

"Use Scrapling to scrape the title and all links from https://example.com"

Bypass protection:

"Scrape https://protected-site.com in stealth mode, bypassing Cloudflare"

Extract data:

"Scrape product names and prices from https://shop.com, using the CSS selector .product"

Dynamic content:

"Scrape https://spa-app.com, waiting for the .data-loaded element to finish loading"

Python Code

# Basic scraping
from scrapling.fetchers import Fetcher
page = Fetcher.get('https://example.com')
title = page.css('title::text').get()

# Bypass Cloudflare
from scrapling.fetchers import StealthyFetcher
page = StealthyFetcher.fetch('https://protected.com', 
                              headless=True, 
                              solve_cloudflare=True)

# JavaScript sites
from scrapling.fetchers import DynamicFetcher
page = DynamicFetcher.fetch('https://spa-app.com', 
                             headless=True, 
                             network_idle=True)

Features

Feature        Command              Description
Basic Scrape   --mode basic         Fast HTTP requests
Stealth Mode   --mode stealth       Bypass Cloudflare/anti-bot
Dynamic Mode   --mode dynamic       Handle JavaScript sites
CSS Selectors  --selector ".class"  Extract specific elements
JSON Output    --json               Machine-readable output
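The three modes correspond to the three fetcher classes shown in the Python examples above. A hypothetical dispatch table sketches the mapping; the bundled wrapper may choose fetchers differently:

```python
# Hypothetical mapping from --mode values to Scrapling fetcher class names,
# mirroring the feature table above. The actual wrapper may dispatch differently.
MODE_FETCHERS = {
    "basic": "Fetcher",            # fast HTTP requests
    "stealth": "StealthyFetcher",  # Cloudflare / anti-bot bypass
    "dynamic": "DynamicFetcher",   # JavaScript-heavy sites
}

def fetcher_for(mode):
    """Return the fetcher class name for a mode, or raise on unknown input."""
    if mode not in MODE_FETCHERS:
        raise ValueError(f"unknown mode: {mode!r}")
    return MODE_FETCHERS[mode]
```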

Examples

1. Scrape with CSS Selector

python3 scrapling_tool.py https://quotes.toscrape.com --selector ".quote .text" --json

2. Bypass Cloudflare

python3 scrapling_tool.py https://nopecha.com/demo/cloudflare --mode stealth --cloudflare

3. Wait for Dynamic Content

python3 scrapling_tool.py https://spa-app.com --mode dynamic --wait ".loaded" --json

CLI Reference

python3 scrapling_tool.py URL [options]

Options:
  --mode {basic,stealth,dynamic}  Scraping mode (default: basic)
  --selector, -s CSS_SELECTOR     Extract specific elements
  --cloudflare                    Solve Cloudflare (stealth mode only)
  --wait SELECTOR                 Wait for element (dynamic mode only)
  --json, -j                      Output as JSON
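The listing does not show how the bundled script parses these options; a plausible argparse sketch matching the reference above (function and variable names here are assumptions, and the actual scrapling_tool.py may differ):

```python
# Hypothetical re-creation of the CLI above using argparse; the bundled
# scrapling_tool.py may be implemented differently.
import argparse

def build_parser():
    parser = argparse.ArgumentParser(description="Scrapling wrapper CLI")
    parser.add_argument("url", help="target URL to scrape")
    parser.add_argument("--mode", choices=["basic", "stealth", "dynamic"],
                        default="basic", help="scraping mode (default: basic)")
    parser.add_argument("--selector", "-s", metavar="CSS_SELECTOR",
                        help="extract specific elements")
    parser.add_argument("--cloudflare", action="store_true",
                        help="solve Cloudflare (stealth mode only)")
    parser.add_argument("--wait", metavar="SELECTOR",
                        help="wait for element (dynamic mode only)")
    parser.add_argument("--json", "-j", action="store_true",
                        help="output as JSON")
    return parser

# Example invocation mirroring the "Bypass Cloudflare" command above:
args = build_parser().parse_args(
    ["https://protected-site.com", "--mode", "stealth", "--cloudflare"])
```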

Advanced: Custom Scripts

Create custom scraping scripts in /root/.openclaw/skills/scrapling-web-scraping/:

from scrapling.fetchers import StealthyFetcher

# Your custom scraper
def scrape_products(url):
    page = StealthyFetcher.fetch(url, headless=True)
    products = []
    for item in page.css('.product'):
        products.append({
            'name': item.css('.name::text').get(),
            'price': item.css('.price::text').get(),
            'link': item.css('a::attr(href)').get()
        })
    return products
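Results from a scraper like this are plain dictionaries, so they serialize directly to JSON with the standard library. A minimal sketch, with made-up sample records mimicking the shape returned above:

```python
# Persist scraped records as JSON; the sample data mimics the shape
# returned by a scraper like scrape_products() above.
import json

products = [
    {"name": "Widget", "price": "$9.99", "link": "/products/widget"},
    {"name": "Gadget", "price": "$19.99", "link": "/products/gadget"},
]

# ensure_ascii=False keeps non-ASCII text readable in the output.
payload = json.dumps(products, ensure_ascii=False, indent=2)
```

Write `payload` to a file with `open(..., "w", encoding="utf-8")` to keep results between runs.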

Notes

  • Requires Python 3.10+
  • First run: scrapling install to download browsers
  • Respect website Terms of Service
  • Use responsibly

Created: 2026-03-05 by 老二
Source: https://github.com/D4Vinci/Scrapling

Files

3 total
