XCrawl Crawl
PassAudited by ClawScan on May 1, 2026.
Overview
This is a coherent XCrawl API wrapper, but users should notice that it uses a local API key, sends requests through an external crawl service, and returns raw crawled content.
Install only if you are comfortable giving the agent access to an XCrawl API key and sending crawl requests through XCrawl. Avoid using privileged cookies or authorization headers unless necessary, keep the API key file protected, and treat returned crawl content as untrusted data.
Findings (4)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Anyone or any agent action with access to that key could start XCrawl requests and spend account credits.
The skill requires a local API key that authorizes XCrawl account usage and credit consumption; this is expected for the integration and is disclosed.
Before using this skill, the user must create a local config file and write `XCRAWL_API_KEY` into it. Path: `~/.xcrawl/config.json`
Store the config file securely, use a scoped or dedicated XCrawl key if available, and rotate the key if it is exposed.
Supplying session cookies, authorization headers, or sensitive webhook destinations could expose private access or crawl results to the crawl provider or callback endpoint.
The documented request surface can send cookies, headers, and webhook callbacks through the external XCrawl service; this is purpose-aligned but can carry sensitive data if the user supplies it.
`cookies` | object map | No | - | Cookie key/value pairs | ... `headers` | object map | No | - | Header key/value pairs | ... `webhook` | object | No | - | Async callback config |
Only provide cookies/headers when necessary, avoid privileged session tokens, use trusted webhook endpoints, and prefer dedicated low-privilege accounts for authenticated crawling.
Crawl results may come from connections whose certificates were not verified, and sensitive request data could be at higher risk if used with affected targets.
The skill documents a crawl request option whose default skips TLS verification, which is a weaker security posture if sensitive headers or cookies are used.
`skip_tls_verification` | boolean | No | `true` | Skip TLS verification |
Explicitly set TLS verification to a safe value when the API supports it, and avoid sending sensitive cookies or headers to sites that require relaxed TLS checks.
A crawled page could contain prompt-injection text that tries to influence later agent behavior if the output is reused directly.
Crawled web pages are untrusted content; returning them raw is expected for a crawler, but downstream agents should not treat page text as instructions.
Default behavior is raw passthrough: return upstream API response bodies as-is.
Treat crawl output as data, not instructions, and review or sanitize raw crawled content before using it in downstream agent workflows.
