Scrapling Web Scraping

Zero-bot-detection web scraping for OpenClaw. Bypass Cloudflare, handle JavaScript-heavy sites, and adapt to website changes automatically. Use when you need...

MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan

VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The name, description, and included CLI wrapper (scrapling_tool.py) are consistent with a scraper that supports basic, stealth, and dynamic modes. Requesting no credentials and no special system paths is coherent for a scraping helper. However, the SKILL.md instructs users to run pip install 'scrapling[all]' and 'scrapling install' (to download browsers) even though the skill metadata provides no install spec or provenance; that mismatch is notable.
Instruction Scope
Runtime instructions tell the operator to pip install an external package and run its installer, which downloads browsers and other components. The docs explicitly advertise 'bypass Cloudflare' and 'undetectable' stealth modes, capability statements that can be used to evade protections. The SKILL.md also uses absolute /root/.openclaw paths in its examples (assuming a root environment) and asks users to create custom scripts in the skill directory. The instructions therefore direct network installs and possible binary downloads outside the skill bundle, and they contain operational guidance for bypassing protections: scope beyond a simple CLI wrapper.
Install Mechanism
There is no declared install spec in the registry, yet SKILL.md tells users to pip install 'scrapling[all]' and run 'scrapling install', which downloads browser binaries. Installing an external PyPI package and letting it fetch browsers is moderate-to-high risk because the registry metadata records no package sources, checksums, or URLs. The skill does not bundle those downloaded assets, so the runtime will pull code and binaries from remote endpoints under the third-party package's control.
Credentials
The skill declares no required environment variables or credentials and the included Python wrapper doesn't read secrets. That is proportionate for a scraper helper. However, stealth/Cloudflare-solve features sometimes rely on external solver services or browser automation that may require API keys or services not declared here — the absence of any declared credentials or configuration for such services is an implementation/provenance gap to verify.
Persistence & Privilege
The skill does not request 'always: true', does not declare system-wide config changes, and the code provided is a simple CLI wrapper that only calls into the scrapling package. There is no evidence in the included files that the skill persistently modifies other skills or global agent settings.
What to consider before installing
This skill appears to be a legitimate scraper wrapper, but it asks you to install and run an external Python package and to download browsers at runtime; these actions fetch and execute code and binaries from the network. Before installing or running it:

1. Verify the upstream project: inspect the scrapling package source on PyPI/GitHub and confirm the 'scrapling install' download URLs and checksums.
2. Prefer running pip install inside an isolated environment (container/VM) to limit the blast radius.
3. Audit the scrapling package dependencies and any post-install scripts for network endpoints or credential usage.
4. Consider legal and ethical implications: stealth modes that 'bypass Cloudflare' or claim to be 'undetectable' can violate terms of service or law; only use them against targets you own or have explicit permission to scrape.
5. If you need to proceed, run network monitoring during the first install to see what is downloaded, and avoid elevated privileges (do not run as root unless you understand and accept the risk).
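A low-effort way to follow point 2 is to create the isolated environment with Python's standard library before installing anything. This is a minimal sketch (the directory name is arbitrary, and the `bin/` layout assumes Linux/macOS; Windows uses `Scripts\`):

```python
import pathlib
import tempfile
import venv

# Create a throwaway virtual environment so that a later
# "pip install scrapling[all]" cannot touch system site-packages.
env_dir = pathlib.Path(tempfile.mkdtemp()) / "scrapling-sandbox"
venv.create(env_dir, with_pip=True)

# The environment's own interpreter (POSIX layout; Windows uses Scripts\).
env_python = env_dir / "bin" / "python"
print(env_python.exists())
```

Then install with `env_python -m pip install "scrapling[all]"` so nothing leaks into the agent's global interpreter.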

Like a lobster shell, security has layers: review code before you run it.

Current version: v1.0.0


SKILL.md

Scrapling Web Scraping

Zero-bot-detection web scraping for OpenClaw. Bypass Cloudflare, handle JavaScript-heavy sites, and adapt to website changes automatically.

Quick Start

# Install Scrapling
pip install "scrapling[all]"
scrapling install

# Basic usage
python3 /root/.openclaw/skills/scrapling-web-scraping/scrapling_tool.py https://example.com

# Bypass Cloudflare
python3 /root/.openclaw/skills/scrapling-web-scraping/scrapling_tool.py https://protected-site.com --mode stealth --cloudflare

# Extract specific data
python3 /root/.openclaw/skills/scrapling-web-scraping/scrapling_tool.py https://example.com --selector ".product-title"

# JavaScript-heavy sites
python3 /root/.openclaw/skills/scrapling-web-scraping/scrapling_tool.py https://spa-app.com --mode dynamic --wait ".content-loaded"

Usage with OpenClaw

Natural Language Commands

Basic scraping:

"Use Scrapling to scrape the title and all links from https://example.com"

Bypass protection:

"Scrape https://protected-site.com in stealth mode, bypassing Cloudflare"

Extract data:

"Scrape product names and prices from https://shop.com, using the CSS selector .product"

Dynamic content:

"Scrape https://spa-app.com, waiting for the .data-loaded element to finish loading"

Python Code

# Basic scraping
from scrapling.fetchers import Fetcher
page = Fetcher.get('https://example.com')
title = page.css('title::text').get()

# Bypass Cloudflare
from scrapling.fetchers import StealthyFetcher
page = StealthyFetcher.fetch('https://protected.com', 
                              headless=True, 
                              solve_cloudflare=True)

# JavaScript sites
from scrapling.fetchers import DynamicFetcher
page = DynamicFetcher.fetch('https://spa-app.com', 
                             headless=True, 
                             network_idle=True)

Features

| Feature | Command | Description |
|---------|---------|-------------|
| Basic Scrape | `--mode basic` | Fast HTTP requests |
| Stealth Mode | `--mode stealth` | Bypass Cloudflare/anti-bot |
| Dynamic Mode | `--mode dynamic` | Handle JavaScript sites |
| CSS Selectors | `--selector ".class"` | Extract specific elements |
| JSON Output | `--json` | Machine-readable output |

Examples

1. Scrape with CSS Selector

python3 scrapling_tool.py https://quotes.toscrape.com --selector ".quote .text" --json

2. Bypass Cloudflare

python3 scrapling_tool.py https://nopecha.com/demo/cloudflare --mode stealth --cloudflare

3. Wait for Dynamic Content

python3 scrapling_tool.py https://spa-app.com --mode dynamic --wait ".loaded" --json

CLI Reference

python3 scrapling_tool.py URL [options]

Options:
  --mode {basic,stealth,dynamic}  Scraping mode (default: basic)
  --selector, -s CSS_SELECTOR     Extract specific elements
  --cloudflare                    Solve Cloudflare (stealth mode only)
  --wait SELECTOR                 Wait for element (dynamic mode only)
  --json, -j                      Output as JSON
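When scripting against the wrapper, the `--json` output can be consumed with the standard library. The exact schema is not documented here, so the shape below (a flat list of matched texts) is an assumption to verify against real output from `scrapling_tool.py`:

```python
import json

# Hypothetical --json output shape: a list of strings matched by the
# selector. Verify against real scrapling_tool.py output before relying
# on this structure.
sample_output = '["Quote one", "Quote two"]'

quotes = json.loads(sample_output)
print(len(quotes))  # 2

# In practice you would capture the CLI's stdout instead, e.g.:
# import subprocess
# result = subprocess.run(
#     ["python3", "scrapling_tool.py", "https://quotes.toscrape.com",
#      "--selector", ".quote .text", "--json"],
#     capture_output=True, text=True, check=True)
# quotes = json.loads(result.stdout)
```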

Advanced: Custom Scripts

Create custom scraping scripts in /root/.openclaw/skills/scrapling-web-scraping/:

from scrapling.fetchers import StealthyFetcher

# Your custom scraper
def scrape_products(url):
    page = StealthyFetcher.fetch(url, headless=True)
    products = []
    for item in page.css('.product'):
        products.append({
            'name': item.css('.name::text').get(),
            'price': item.css('.price::text').get(),
            'link': item.css('a::attr(href)').get()
        })
    return products
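Scraped text nodes usually need cleanup before use. The helper below is pure Python with no scrapling dependency; its field names ('name', 'price', 'link') simply mirror the dicts produced by scrape_products above, and the '$1,299.99'-style price format is an assumption about the target site:

```python
def normalize_product(raw: dict) -> dict:
    """Trim whitespace and parse a '$12.34'-style price string to a float.

    Assumes the dict shape produced by scrape_products(); a missing or
    unparseable price becomes None rather than raising.
    """
    name = (raw.get('name') or '').strip()
    link = (raw.get('link') or '').strip()
    price_text = (raw.get('price') or '').strip().lstrip('$')
    try:
        price = float(price_text.replace(',', ''))
    except ValueError:
        price = None
    return {'name': name, 'price': price, 'link': link}


print(normalize_product({'name': '  Widget ', 'price': '$1,299.99', 'link': '/w'}))
# {'name': 'Widget', 'price': 1299.99, 'link': '/w'}
```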

Notes

  • Requires Python 3.10+
  • First run: scrapling install to download browsers
  • Respect website Terms of Service
  • Use responsibly

Created: 2026-03-05 by 老二
Source: https://github.com/D4Vinci/Scrapling

Files

3 total