Scrapling
Advisory. Audited by static analysis on Apr 30, 2026.
Overview
No suspicious patterns detected.
Findings: 0 (the items below are informational notes, not flagged findings)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Using this skill could cause the agent to evade website protections, violate site policies, trigger account/IP blocks, or create legal/compliance risk.
The skill explicitly instructs use of stealth fetching and Cloudflare/anti-bot bypass, rather than limiting scraping to authorized or ordinary access.
Bypass Cloudflare/anti-bot protection ... StealthyFetcher.fetch(... solve_cloudflare=True)
Only use it on sites you own or are explicitly authorized to test or scrape; require explicit approval before anti-bot bypass, stealth mode, proxy use, or high-volume scraping.
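One way to enforce that authorization requirement is a gate in front of any stealth fetch. A minimal sketch, assuming a hypothetical `stealth_fetch` wrapper and placeholder allowlist entries; the commented-out `StealthyFetcher.fetch(..., solve_cloudflare=True)` call is the form the skill itself documents:

```python
from urllib.parse import urlparse

# Domains the operator has explicitly authorized for stealth/anti-bot fetching.
# Placeholder entries; populate from a reviewed, signed-off config in practice.
AUTHORIZED_DOMAINS = {"example.com", "test.example.org"}

def authorized_for_stealth(url: str) -> bool:
    """Return True only if the URL's host is on the explicit allowlist."""
    host = urlparse(url).hostname or ""
    return host in AUTHORIZED_DOMAINS or any(
        host.endswith("." + d) for d in AUTHORIZED_DOMAINS
    )

def stealth_fetch(url: str):
    """Hypothetical wrapper: refuse anti-bot bypass for unauthorized hosts."""
    if not authorized_for_stealth(url):
        raise PermissionError(f"stealth fetch not authorized for {url}")
    # Only past this check would the skill invoke, e.g.:
    # StealthyFetcher.fetch(url, solve_cloudflare=True)
    return f"would fetch {url}"
```

The gate fails closed: an unlisted host raises before any bypass machinery runs, so a mis-scoped task surfaces as an error rather than an unauthorized request.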
A poorly scoped task could expand into large crawls across many pages or sessions, increasing load on websites and making mistakes harder to contain.
The documented crawler can follow links, run concurrent requests, and resume from stored crawl data, but the artifacts do not define containment, rate limits, domain allowlists, or stop conditions.
Building Spiders (large-scale crawling) ... concurrent_requests = 10 ... yield response.follow(next_page) ... MySpider(crawldir="./crawl_data").start()
Set strict domain/path allowlists, request limits, rate limits, and stop conditions before running crawlers; review output and crawl state before resuming.
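The containment controls recommended above can be sketched as a small budget object consulted before each request. This is an illustrative stdlib-only guard, not part of Scrapling's spider API; the class name and limits are assumptions:

```python
import time
from urllib.parse import urlparse

class CrawlBudget:
    """Hypothetical containment guard: domain allowlist, request cap, rate limit."""

    def __init__(self, allowed_domains, max_requests=100, min_interval=1.0):
        self.allowed_domains = set(allowed_domains)
        self.max_requests = max_requests
        self.min_interval = min_interval  # seconds between requests
        self.requests_made = 0
        self._last = 0.0

    def allow(self, url: str) -> bool:
        host = urlparse(url).hostname or ""
        if host not in self.allowed_domains:
            return False  # domain allowlist
        if self.requests_made >= self.max_requests:
            return False  # hard stop condition
        return True

    def record(self):
        """Sleep just enough to honor the rate limit, then count the request."""
        wait = self.min_interval - (time.monotonic() - self._last)
        if wait > 0:
            time.sleep(wait)
        self._last = time.monotonic()
        self.requests_made += 1
```

A spider's link-following loop would call `allow()` before yielding each followed URL and `record()` after each fetch, so the crawl stops at the budget instead of expanding unbounded.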
If used with a real account, the agent may access or collect data from logged-in areas under that account's privileges.
The skill documents cookie/session reuse for logged-in pages. This is expected for scraping workflows, but it can involve authenticated account access.
Maintain sessions (cookie reuse) ... page1 = session.fetch('https://example.com/login') ... page2 = session.fetch('https://example.com/dashboard') # logged-in state
Use dedicated low-privilege accounts when possible, avoid scraping sensitive account pages, and confirm what authenticated data may be collected.
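One concrete form of "avoid scraping sensitive account pages" is a deny-list of authenticated paths checked before any session fetch. A sketch; the path prefixes are placeholders, not derived from the skill's artifacts:

```python
from urllib.parse import urlparse

# Placeholder patterns for account areas that should never be scraped.
SENSITIVE_PATH_PREFIXES = ("/settings", "/billing", "/admin", "/account/security")

def safe_for_session(url: str) -> bool:
    """Block authenticated fetches of known-sensitive account pages."""
    path = urlparse(url).path or "/"
    return not any(path.startswith(p) for p in SENSITIVE_PATH_PREFIXES)
```

A wrapper around `session.fetch` would consult this check first, so cookie reuse stays limited to the pages the task actually needs.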
Installing unpinned packages or browser components may pull code not reviewed in these artifacts.
The installation guidance uses unpinned external package extras and a browser/install command. This is disclosed and purpose-aligned, but the artifact does not pin versions or provide provenance.
pip install "scrapling[all]" ... scrapling install ... pip install "scrapling[ai]"
Verify the Scrapling package source, pin trusted versions, and install in an isolated environment before granting broad browsing or scraping authority.
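The pinning advice can also be enforced at runtime with a version check before granting scraping authority. A sketch using only the standard library; the pinned version string is a placeholder, not a verified Scrapling release:

```python
from importlib import metadata

PINNED_VERSION = "0.0.0"  # placeholder; replace with a reviewed Scrapling release

def version_matches(installed: str, pinned: str) -> bool:
    """Exact-match comparison; hash-based pinning is stricter where available."""
    return installed == pinned

def scrapling_pin_ok(pinned: str = PINNED_VERSION) -> bool:
    """Return True only if Scrapling is installed at the pinned version."""
    try:
        installed = metadata.version("scrapling")
    except metadata.PackageNotFoundError:
        return False  # not installed: fail closed
    return version_matches(installed, pinned)
```

Running this at startup turns a drifted or unexpected install into a refusal rather than a silent use of unreviewed code.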
If enabled, another AI client could invoke scraping capabilities and receive scraped content through the MCP connection.
The skill documents exposing Scrapling through an MCP server to other AI clients, but does not describe authentication, permission boundaries, or data-handling limits.
MCP Server (AI integration) ... lets Claude/Cursor call Scrapling directly to scrape data ... "command": "scrapling", "args": ["mcp"]
Only enable the MCP server for trusted clients, restrict accessible targets and outputs, and avoid sending sensitive or authenticated scraping tasks through it.
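Restricting MCP access could look like a policy check in front of the scrape tool. This is a sketch of the idea only; Scrapling's MCP server is not documented (in these artifacts) to expose such hooks, and the client identifiers and target list are assumptions:

```python
from urllib.parse import urlparse

# Hypothetical policy enforced before the MCP-exposed scrape tool runs.
TRUSTED_CLIENTS = {"claude-desktop", "cursor"}
ALLOWED_TARGETS = {"example.com"}

def mcp_request_allowed(client_id: str, target_url: str) -> bool:
    """Serve only trusted clients, and only for allowlisted target domains."""
    if client_id not in TRUSTED_CLIENTS:
        return False
    host = urlparse(target_url).hostname or ""
    return host in ALLOWED_TARGETS
```

Pairing a client allowlist with a target allowlist keeps a compromised or over-eager AI client from turning the MCP connection into an open scraping proxy.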
