ScraperAPI MCP

v1.0.2

Knowledge base for the 22 ScraperAPI MCP tools. Covers scrape, Google (search, news, jobs, shopping, maps), Amazon (product, search, offers), Walmart (search...

MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Benign
OpenClaw: Benign (medium confidence)
Purpose & Capability
The name and description present a knowledge base for the ScraperAPI MCP tools, and SKILL.md plus the references clearly document using either the hosted MCP (via npx) or a local Python MCP server. Requiring an API key and either npx or python is coherent with that purpose.
Instruction Scope
Instructions focus on selecting and calling ScraperAPI MCP tools, including crawler/callback behaviors; they do not instruct reading unrelated files or broad system state. The references explicitly warn about callbackUrl data flows (scraped content can be sent to arbitrary endpoints), which is expected for a crawler tool but is a behavior users must consciously approve.
Install Mechanism
This is an instruction-only skill with no install spec or code files; the skill itself installs no additional packages and performs no downloads, which is the lowest-risk install model.
Credentials
The skill requires SCRAPERAPI_API_KEY (primary) and also lists API_KEY. The docs explain the difference: the remote hosted MCP expects SCRAPERAPI_API_KEY, while the local Python server expects API_KEY; both hold the same value. Declaring both as required is slightly imprecise (only one is needed, depending on the variant), and the generic name API_KEY could collide with other skills. Confirm which variable(s) you provide and avoid exposing unrelated secrets.
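If you script the local server's startup yourself, the two documented variable names can be reconciled in one place. A minimal sketch, assuming only that SCRAPERAPI_API_KEY and API_KEY are interchangeable as the docs state (the helper name is ours, not the skill's):

```python
import os

def resolve_scraperapi_key() -> str:
    """Return the ScraperAPI key from whichever documented env var is set.

    SCRAPERAPI_API_KEY (hosted MCP) takes precedence over API_KEY
    (local Python server); per the docs, both hold the same value.
    """
    key = os.environ.get("SCRAPERAPI_API_KEY") or os.environ.get("API_KEY")
    if not key:
        raise RuntimeError(
            "Set SCRAPERAPI_API_KEY (hosted) or API_KEY (local) "
            "before starting the MCP server"
        )
    return key
```

Resolving the key through one helper also sidesteps the collision risk: nothing else in your environment needs to claim the generic API_KEY name.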
Persistence & Privilege
always:false and normal autonomous invocation are used. The skill does not request permanent platform-wide privileges or modify other skills' configs. No unusual persistence or elevated privileges are requested.
Assessment
This skill appears to be what it says: a knowledge base for using ScraperAPI's MCP tools. Before installing:

1) Confirm which MCP variant you'll use, hosted (npx) or local (python), and set the matching env var: SCRAPERAPI_API_KEY is the primary key; API_KEY is the local-server name. The skill lists both env names as required even though only one is needed per variant; avoid putting other service secrets into a generic API_KEY variable.
2) Ensure npx or python is available, depending on your chosen setup.
3) Be cautious when creating crawler jobs with callbackUrl: scraped pages (which may include sensitive content or PII) will be POSTed to any callback you set. Only use endpoints you control, and prefer HTTPS.
4) Review credit/cost implications of the premium/ultraPremium/render options, and set crawlBudget and schedules to avoid runaway usage.

If you want higher assurance, ask the publisher why both env vars are listed as required and whether any additional telemetry or network endpoints are contacted beyond the documented MCP servers.
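The callbackUrl caution above can be enforced mechanically before a crawler job is ever created. A minimal sketch; the allowlist hosts and helper name are hypothetical examples, not part of the skill:

```python
from urllib.parse import urlparse

# Hypothetical allowlist of callback hosts you control; adjust to your infra.
ALLOWED_CALLBACK_HOSTS = {"hooks.example.internal", "callbacks.example.com"}

def is_safe_callback_url(url: str) -> bool:
    """Accept a crawler callbackUrl only if it uses HTTPS and points at an
    allowlisted host, so scraped pages are never POSTed to endpoints you
    don't control."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in ALLOWED_CALLBACK_HOSTS
```

A gate like this is cheap insurance: run every callbackUrl through it before passing the job to the MCP tool, and reject anything that fails rather than relying on manual review.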

Like a lobster shell, security has layers — review code before you run it.

latest: vk9790nx4kx2jjyam2fms31x94s83zfy8


Runtime requirements

Any bin: npx, python
Env: SCRAPERAPI_API_KEY, API_KEY
Primary env: SCRAPERAPI_API_KEY
