Scrapling
Web scraping and data extraction using the Python Scrapling library. Use to scrape static HTML pages, JavaScript-rendered pages (Playwright), and anti-bot or...
MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw verdict: Benign (medium confidence)
Purpose & Capability
Name and description match the included SKILL.md and the Python helper. The skill documents static, dynamic, and stealthy fetchers and includes a matching CLI/py script. There are no environment variables, config paths, or unrelated binaries requested that would be inconsistent with a scraping tool.
Instruction Scope
SKILL.md and the script stay within scraping scope: they cover installing scrapling and Playwright, choosing fetchers, running the included CLI, and optionally using sessions (including a login example). The instructions do show examples that post login forms (session.post), which implies handling credentials, but the skill does not request or capture secrets itself. The doc also recommends respecting site terms and adding safety controls.
Install Mechanism
This is an instruction-only skill with no install spec; it tells users to pip install scrapling plus optional extras and to run the Playwright installer. Installing Python packages and Playwright is expected for this functionality, but it does entail downloading and executing third-party code (PyPI packages and browser drivers), which is normal but should be reviewed before installation.
Credentials
The skill declares no required environment variables, credentials, or config paths. Example code demonstrates how to post credentials for login flows, which is appropriate for session-based scraping, but the skill itself does not request or attempt to exfiltrate secrets.
Persistence & Privilege
The skill is not always-included and allows normal autonomous invocation. It does not request permanent system-wide privileges or modify other skills' configurations. There is no install-time behavior in the bundle that persists state beyond normal use.
Assessment
This skill is coherent for web scraping, but before installing:
1. Review the third-party Python package 'scrapling' (PyPI, source repo, maintainers) to ensure it is trustworthy.
2. Be aware that pip installing extras and running Playwright will download and execute external code and browser binaries; do so in a controlled environment if unsure.
3. Scraping stealth/anti-bot protected sites can violate terms of service or laws; only use it against sites you are authorized to scrape.
4. The example shows posting credentials for login flows; never supply sensitive credentials to unknown code or services.
5. Test in a sandbox/container and audit network activity if you need higher assurance.

Like a lobster shell, security has layers: review code before you run it.
Current version: v1.0.3
Tags: anti-bot, automation, css-selector, extraction, html, latest, playwright, scraping, stealth, web-scraping, xpath
SKILL.md
Scrapling
Extract structured website data with resilient selection patterns, adaptive relocation, and the right Scrapling fetcher mode for each target.
Workflow
- Identify the target type before writing code:
  - Use `Fetcher` for static pages and API-like HTML responses.
  - Use `DynamicFetcher` when JavaScript rendering is required.
  - Use `StealthyFetcher` when anti-bot protection or browser fingerprinting issues are likely.
- Choose the output contract first:
  - Return JSON for pipelines/automation.
  - Return Markdown/text for summarization or RAG ingestion.
  - Keep stable field names even if the selector strategy changes.
- Implement selectors in this order:
  - Start with CSS selectors and pseudo-elements (for example `::text`, `::attr(href)`).
  - Fall back to XPath for ambiguous DOM structure.
  - Enable adaptive relocation for brittle or changing pages.
- Add safety controls:
  - Respect target site terms and legal boundaries.
  - Add timeouts, retries, and explicit error handling.
  - Log status code, URL, and selector misses for debugging.
- Validate on at least 2 pages:
  - Test one happy-path and one edge-case page.
  - Confirm required fields are non-empty.
  - Keep extraction deterministic (no hidden random choices).
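The validation step above can be sketched as a small helper. This is a generic illustration: the `validate_record` name and the dict-based record shape are assumptions, not part of the skill's bundled code.

```python
def validate_record(record: dict, required_fields: list[str]) -> list[str]:
    """Return the names of required fields that are missing or empty."""
    return [field for field in required_fields if not record.get(field)]

# Happy-path page: all required fields were extracted.
happy = {"title": "Example Domain", "url": "https://example.com"}
# Edge-case page: a selector missed, leaving an empty field.
edge = {"title": "", "url": "https://example.com/edge"}

required = ["title", "url"]
assert validate_record(happy, required) == []
assert validate_record(edge, required) == ["title"]
```

Running a check like this on both test pages before shipping the scraper keeps selector regressions from silently producing empty records.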
Quick Setup
- Install the base package:
  - `pip install scrapling`
- Install fetchers when browser-based fetching is needed (required for `DynamicFetcher` and `StealthyFetcher`):
  - `pip install "scrapling[fetchers]"`
  - `scrapling install` or `python3 -m playwright install`
- Install optional extras as needed:
  - `pip install "scrapling[shell]"` for shell + `extract` commands
  - `pip install "scrapling[ai]"` for MCP capabilities
Execution Patterns
Pattern: One-off terminal extraction
Use the Scrapling CLI for the fastest no-code extraction:
scrapling extract get "https://example.com" content.md --css-selector "main"
Pattern: Python extraction script
Use the bundled helper:
# Static page (default)
python scripts/extract_with_scrapling.py --url "https://example.com" --css "h1::text"
# JavaScript-rendered page
python scripts/extract_with_scrapling.py --url "https://example.com" --fetcher dynamic --css "h1::text"
# Anti-bot protected page
python scripts/extract_with_scrapling.py --url "https://example.com" --fetcher stealthy --css "h1::text"
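The `--fetcher` flag corresponds to Scrapling's three fetcher classes. A minimal dispatch sketch of that mapping follows; the `resolve_fetcher` helper and the `FETCHER_CLASSES` table are illustrative assumptions, not the bundled script's actual code.

```python
# Hypothetical mapping from a --fetcher flag value to a Scrapling fetcher class name.
FETCHER_CLASSES = {
    "static": "Fetcher",
    "dynamic": "DynamicFetcher",
    "stealthy": "StealthyFetcher",
}

def resolve_fetcher(flag: str) -> str:
    """Map a CLI flag value to a fetcher class name, failing loudly on typos."""
    try:
        return FETCHER_CLASSES[flag]
    except KeyError:
        raise ValueError(
            f"unknown fetcher {flag!r}; expected one of {sorted(FETCHER_CLASSES)}"
        )

assert resolve_fetcher("dynamic") == "DynamicFetcher"
```

Failing loudly on an unknown flag value is preferable to silently falling back to the static fetcher, which would just return unrendered HTML for JS-heavy targets.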
Pattern: Session-based scraping
Use session classes when cookies/state must persist across requests.
from scrapling.fetchers import FetcherSession
session = FetcherSession()
login_page = session.post("https://example.com/login", data={"user": "...", "pass": "..."})
protected_page = session.get("https://example.com/dashboard")
headline = protected_page.css_first("h1::text")
Use StealthySession or DynamicSession as drop-in replacements for anti-bot or JS-rendered targets.
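Session requests are a natural place for the timeout/retry controls recommended in the workflow. The sketch below is generic, not Scrapling-specific; the `with_retries` helper is an assumption for illustration.

```python
import time

def with_retries(fn, attempts=3, delay=1.0):
    """Call fn(), retrying on any exception with a fixed delay between tries."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: surface the last error
            time.sleep(delay)

# Usage sketch: wrap a session call so transient network failures are retried.
# page = with_retries(lambda: session.get("https://example.com/dashboard"))
```

Pair this with explicit request timeouts so a hung connection counts as a failed attempt rather than blocking the whole run.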
Pattern: DOM change resilience
Use auto_save=True on initial capture and retry with adaptive selection on later runs when selectors break.
from scrapling.fetchers import Fetcher
# First run: saves DOM snapshot so adaptive relocation can work later
page = Fetcher.auto_match("https://example.com", auto_save=True, disable_adaptive=False)
price = page.css_first(".price::text")
# Later runs: automatically relocates the selector even if the DOM changed
page = Fetcher.auto_match("https://example.com", auto_save=False, disable_adaptive=False)
price = page.css_first(".price::text")
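The 'log selector misses' guidance from the workflow applies to any page object exposing `css_first`, as Scrapling responses do. A hedged sketch follows; the `extract_or_log` helper and the `FakePage` stand-in are assumptions for illustration, not part of the skill.

```python
def extract_or_log(page, selector, misses):
    """Return the first match for selector, recording the selector on a miss."""
    node = page.css_first(selector)
    if node is None:
        misses.append(selector)
    return node

# Stand-in page object for illustration; any object with css_first works.
class FakePage:
    def __init__(self, data):
        self.data = data

    def css_first(self, selector):
        return self.data.get(selector)

misses = []
page = FakePage({".price::text": "9.99"})
assert extract_or_log(page, ".price::text", misses) == "9.99"
assert extract_or_log(page, ".title::text", misses) is None
assert misses == [".title::text"]
```

Accumulating misses this way makes it easy to log them in one place after a run, which is when adaptive relocation is worth enabling.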
References
- Use scrapling-reference.md for fetcher/API examples and selector patterns.
- Use extract_with_scrapling.py for a reusable CLI script template.
Files
4 total
