Scrapling MCP
Review
Audited by ClawScan on May 1, 2026.
Overview
This is a coherent Scrapling web-scraping/MCP guide with no evidence of hidden exfiltration or destructive behavior, but it enables powerful crawling and anti-bot workflows that users should control carefully.
Install only if you need web scraping through Scrapling/MCP. Recommended precautions:
- Use a virtual environment and pin or review third-party dependencies.
- Limit crawling scope and concurrency.
- Avoid scraping private, paywalled, or personal data without authorization.
- Clean up any session cookies or crawl checkpoints after use.
Findings (5)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
An agent using this skill could fetch pages in stealth mode or attempt to bypass anti-bot checks if the user directs it to do so.
The MCP tool can perform anti-bot scraping. This is disclosed and aligned with the skill purpose, but it can be misused against sites without permission.
```
### fetch_stealthy
Anti-bot fetch with Cloudflare bypass.
...
"solve_cloudflare": true
```
Use only on sites you own or have permission to automate, and set explicit limits for targets, concurrency, delays, and bypass features.
Installing the skill’s recommended dependencies adds third-party Python and browser code to the local environment.
Setup relies on external, unpinned package and browser downloads. This is expected for the integration, but users should review the dependency source and pin versions where possible.
```shell
pip install scrapling[mcp]

# With browser automation
pip install scrapling[mcp,playwright]
python -m playwright install chromium
```
Install in a virtual environment, review the Scrapling package source, and pin versions if using it for repeatable or sensitive workflows.
Once configured, the agent can call a local Python-based MCP server for scraping operations.
The setup launches a Python MCP server. This is central to the skill’s purpose and user-configured, not hidden automatic execution.
"mcpServers": {
"scrapling": {
"command": "python",
"args": ["-m", "scrapling.mcp"]
}
}Configure the MCP server only in an environment where you trust the installed Scrapling package and understand which agents can invoke it.
Scraping sessions may reuse cookies or state, which could unintentionally carry a logged-in identity across requests.
Session and cookie persistence can preserve target-site state across scraping requests. This is useful for reliability but may retain sensitive browsing/session context if used on authenticated sites.
```
Persist sessions/cookies across requests.
```
Avoid using personal or sensitive authenticated sessions unless necessary, and clear session/crawl data when finished.
Data supplied to MCP calls, such as target URLs or page HTML, may be processed by the Scrapling MCP server.
The skill is designed to send scraping requests through an MCP server. This is disclosed and purpose-aligned, but it creates a tool boundary where URLs, HTML, and extraction requests are passed to another local component.
```
Use via mcporter (MCP) to call the `scrapling` MCP server for execution
```
Do not pass sensitive page contents or private URLs through the MCP server unless you trust the local environment and package.
